Thursday, December 20, 2007

Linear Playback

Overview
Scott Selfon

Okay, let's get cracking. The first thing we want to do is be able to play sound. We add variability, dynamism, and the rest later. Let's focus on playing back a wave file.

Note At this point, if you have not already, you must install DirectX 9.0 as well as DirectMusic Producer, which are available on the companion CD.

Waves in Wave Tracks
To play back a wave file in DirectMusic, you'll create a Segment out of it. As already mentioned, the DirectMusic Segment file (.sgt) is the basic unit of DirectMusic production. Segments are built from one or more of DirectMusic's various track types, which can make sound (via stand-alone waves and/or DLS instrument triggering) or modify performances (tempo, chord progression, intensity level, etc.). You can create a Segment out of a wave file in DirectMusic Producer. Run DirectMusic Producer and go to File>Import File into Project>Wave File as Segment…. Open a wave file into the program. Building a Segment from a wave file, we get our first look at one of the basic track types in DirectMusic — the Wave Track. Beyond the 32 variation buttons (which we cover in Chapter 3), Wave Tracks play along the sequencer timeline with other Wave Tracks and also with MIDI sequences.

If you look at the size of the Segment file, it is much smaller than the wave file. Here is an important early lesson in content delivery: a Segment knows where the wave and sample files it needs to play are located, but it does not store those files as part of itself. There are several reasons for this design: if several Segments use the same wave data, you do not have to worry about keeping two copies of it in memory, and if you later want to go back and edit that wave data, you do not have to worry about copying it into several different places. That said, there are various reasons that you might instead want to embed the wave data within the Segment itself: for instance, the convenience of only having to deliver a single file, or file load time considerations.

Files whose extension ends in "p" (.sgp, .wvp, .dlp, and so on) are design-time files. Design-time files are used in DirectMusic Producer for editing purposes and contain information not necessary at run time; for instance, a Segment used in a game does not need to know what size and position to give its editing windows. In addition, design-time files always reference the content they use, even if you specify that content should be embedded. For these reasons, when content is meant to be integrated into a game or a stand-alone player, you should save the Segment as a run-time file (either via the per-file right-click menu Runtime Save As… option or the global Runtime Save All Files option from the File menu). When Segments are run-time saved, you will see the .sgt extension, and wave files will similarly have the more familiar .wav extension. To summarize, save your Segments as design-time (.sgp) files while you are working on them and as run-time (.sgt) files when they are finished and ready for delivery. For more details on content delivery, see the white paper "Delivering the Goods: Microsoft DirectMusic File Management Tips" on the Microsoft Developer Network web site (msdn.microsoft.com).
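As a quick preview of how that run-time content eventually gets used, here is a minimal C++ sketch of loading and playing a run-time Segment with the DirectX 9 DirectMusic interfaces. Error handling is omitted, the file and directory names are hypothetical placeholders, and a real game would also release the interfaces and call CoUninitialize at shutdown.

#include <windows.h>
#include <dmusici.h>

void PlayRuntimeSegment(HWND hWnd)
{
    CoInitialize(NULL);

    IDirectMusicLoader8*      pLoader      = NULL;
    IDirectMusicPerformance8* pPerformance = NULL;
    IDirectMusicSegment8*     pSegment     = NULL;

    // Create the loader (finds files and resolves references to waves,
    // DLS Collections, etc.) and the performance (plays Segments).
    CoCreateInstance(CLSID_DirectMusicLoader, NULL, CLSCTX_INPROC,
                     IID_IDirectMusicLoader8, (void**)&pLoader);
    CoCreateInstance(CLSID_DirectMusicPerformance, NULL, CLSCTX_INPROC,
                     IID_IDirectMusicPerformance8, (void**)&pPerformance);

    // Initialize a default shared audiopath with 64 performance channels.
    pPerformance->InitAudio(NULL, NULL, hWnd, DMUS_APATH_SHARED_DEFAULT,
                            64, DMUS_AUDIOF_ALL, NULL);

    // Tell the loader where referenced run-time content (waves, .dls
    // collections) lives. Both paths here are hypothetical.
    WCHAR wszMediaDir[MAX_PATH] = L"C:\\MyGame\\Audio";
    WCHAR wszSegment[MAX_PATH]  = L"C:\\MyGame\\Audio\\MyMusic.sgt";
    pLoader->SetSearchDirectory(GUID_DirectMusicAllTypes, wszMediaDir, FALSE);

    // Load the Segment, download its wave/DLS data to the synthesizer,
    // and play it as a primary Segment on the default audiopath.
    pLoader->LoadObjectFromFile(CLSID_DirectMusicSegment,
                                IID_IDirectMusicSegment8,
                                wszSegment, (void**)&pSegment);
    pSegment->Download(pPerformance);
    pPerformance->PlaySegmentEx(pSegment, NULL, NULL, 0, 0, NULL, NULL, NULL);
}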
Streaming Versus In-Memory Playback
A minute-long 44.1 kHz, 16-bit stereo wave file is roughly 10MB. 10MB isn't a big deal when you play a stand-alone DirectMusic file on a contemporary computer, but it is a very large file in the world of game audio. Do not forget that the rest of a game's resources need to reside in memory as well. If the game is written to run on a system with as little as 64MB of RAM, you are already in way over your head. Console games are even more unforgiving. Consider yourself lucky if you get 4MB for your entire sound budget! You can use audio streaming to alleviate these restrictions. Streaming works a lot like a cassette player: audio data moves across the play head a little at a time, and the play head reads the data as it comes and plays the appropriate sound. With streaming, a small area in memory called a buffer is created, and wave files are moved through the buffer, a chunk at a time, to be read by DirectMusic. The CD player in your PC uses streaming for playback. If it weren't for streaming, you'd have to load the entire music file into your computer's RAM, which in most cases simply isn't an option.

DirectMusic uses the following rule for streaming: The wave streams with 500 msec readahead if it is more than five seconds long. If it is shorter than five seconds, it resides and plays from memory (RAM). Readahead determines the size of the buffer. For 500 msec, our 44.1 kHz 16-bit stereo wave file will use 88.2KB of memory (.5 sec x 44100 samples/sec x 2 bytes/sample x 2 channels), a big difference when compared to 10MB! Memory usage is reduced by a factor of more than 100! You can override this behavior, choosing to load all wave data to memory, or specify a different readahead value in the Compression/Streaming tab of a wave's property page in DirectMusic Producer. To get to any Track's property page, simply right-click on it in the Track window.
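If you want to sanity-check that readahead math for other formats, the arithmetic is simple enough to wrap in a little helper. The following function is purely illustrative and is not part of the DirectMusic API.

#include <cstdio>

// Hypothetical helper: streaming buffer size in bytes for a given
// readahead time and wave format.
unsigned long StreamBufferBytes(double readaheadSeconds,
                                unsigned long samplesPerSecond,
                                unsigned long bytesPerSample,
                                unsigned long channels)
{
    return (unsigned long)(readaheadSeconds * samplesPerSecond *
                           bytesPerSample * channels);
}

int main()
{
    // 500 msec of 44.1 kHz, 16-bit stereo audio: 88,200 bytes (88.2KB).
    std::printf("%lu bytes\n", StreamBufferBytes(0.5, 44100, 2, 2));
    return 0;
}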
Tracks and Parts and Layers, Oh My
Before we move any further, the concepts of Tracks and parts could use a bit of sorting out. When we created our Segment from a wave file, it consisted of a single Track (a Wave Track) with a single part (by default on pchannel 1). Each part specifies its own variation behavior, the pchannel it is played on, and, in the case of Wave Tracks, the volume for that part as well. By comparison, Tracks specify big-picture (or "global") behaviors such as clock-time playback, as well as more exotic settings like multiple, concurrent Track groups. If you open the property page for a Track or part, you'll notice separate tabs with their own settings; the Track properties actually include both the Track Group Assignments and Flags tabs in addition to the actual Track properties tab.

Remember that waves in Wave Track parts never conflict with DLS Collections on a pchannel; you can play as many waves as you want simultaneously and still play a DLS Collection instrument on the same pchannel without any problem. The reason a Wave Track part can be assigned to a pchannel is that MIDI controllers can still be used to drive wave playback (for instance, pan on a mono wave, pitch bend, volume, etc.). The pchannel that a Wave Track is assigned to can be altered in the wave part's property page.

This brings us to the concept of layers, the various lettered rows (a, b, c, etc.) that you see displayed in DirectMusic Producer for a single Wave Track part. As mentioned, you can play as many waves at one time as you wish. Layers are therefore purely a visual aid for placing waves against the Segment timeline; without them, many overlapping waves would be crowded into a tiny area of the screen, making it hard to edit them or to tell which wave starts and finishes when. Waves can be placed on different layers within a part for easier legibility. All layers are equal, are played simultaneously, and do not take any additional resources (processor power or memory) when played.


Figure 2-1: Multiple waves on Wave Tracks are split into "layers" so you can see their beginnings and endings more easily.
As a last bit of terminology for now, certain parts are actually subdivided even further into strips. In particular, the parts for MIDI-supporting Tracks (Pattern Tracks and Sequence Tracks) have separate strips for note information and continuous controller information. Pattern Tracks also have an additional variation switch point strip, which we cover in Chapter 3 when we discuss variations.
MIDI and DLS Collections
Creating a DirectMusic Segment from a piece of MIDI music is simple. In DirectMusic Producer, follow File>Import File into Project>MIDI File as Segment…. This creates a DirectMusic Segment from a MIDI file, importing note, MIDI controller, and other pertinent data along the way. Let's examine some new Track types related to MIDI:

Tempo Track: Specifies the current tempo for the piece of music. You can override this by playing a primary or controlling secondary Segment with its own tempo settings.

Time Signature Track: Sets the time signature for the piece. Use this to track where measures fall as well as how many subdivisions (grids) to give each beat.

Chord Track: Specifies the key as well as specific chord progressions for a piece of music. Typically, for an imported MIDI file, this will just indicate a C major chord. DirectMusic Producer does not attempt to analyze the imported MIDI file for chordal information.

Sequence Track: A sequence is what its name implies. Sequence Tracks house MIDI sequences. This is where the majority of MIDI information is imported. Notice that as with Wave Tracks, Sequence Tracks can (and typically do) consist of multiple parts. By default, each MIDI channel is brought in as a separate part on the corresponding pchannel. In addition, each part can contain its own continuous controller information. Unlike Pattern Tracks (more on these in Chapter 3), Sequence Tracks are linear, non-variable sequences; they play the same every time, and they do not respond to chord changes (though they do respond to tempo changes).


Figure 2-2: A Segment with a Sequence Track consisting of multiple instruments.

Band Track: Bands are how Performance channels are mapped to DLS instruments. The Band Track is where initial pan, volume, and patch change settings for each channel are stored. This is often an area of confusion, as you can also have volume and pan controller data within Sequence Track (or Pattern Track) parts. The Band Track typically holds just a single Band with the initial settings; subsequent volume changes are typically created as continuous MIDI controller events in Sequence or Pattern Tracks. Continuous controller events, unlike Band settings, allow you to easily sweep values over time (for example, to fade a track in or out). If any patch changes occur during the performance, another Band can be placed in the Band Track, though the elimination of MIDI channel limitations means there is often little reason not to just place each instrument on its own unique and unchanging channel.

Building DLS Collections
Our Segment is now ready for playback. Of course, we're assuming that the piece of music will be played using a preauthored DLS Collection, such as the gm.dls collection that ships with Windows. Otherwise, it would be trying to play instruments that don't exist on the end user's machine, and those instruments would simply play silently. If we wanted to use our own instruments, we would need to build one or more DLS Collections. DirectMusic Producer provides this authoring ability in the DLS Designer. Alternatively, the DLS-2 format is a widespread standard, and there are several tools available for converting from other common wavetable synthesizer formats to DLS.

Creating DLS Collections is often one of the more challenging tasks when you decide to create real-time rendered (versus streamed prerendered) audio. Remember that unlike streamed audio, your DLS Collection will occupy system memory (RAM), which is typically one of the most precious commodities for an application. For this reason, you'll want to create collections that get you the most bang for your buck in terms of memory versus instrument range and quality.

Let's create our first DLS Collection. From the File menu, select New, and from the dialog that appears, choose DLS Collection and hit OK. We're presented with a project tree entry for our DLS Collection (using the design-time .dlp file extension), which has two subfolders, Instruments and Waves.


Figure 2-3: An empty DLS Collection in the project tree.
DLS Collections are composed of waves (effectively, ordinary wave files) and instruments, which consist of instructions for how waves should map to MIDI notes, along with a fairly sophisticated set of audio-processing features. Let's add a few waves for our first instrument, a piano. We can drag our waves right into the Waves folder from a Windows Explorer window or right-click on the Waves folder and choose Insert Wave… from the menu that appears.


Figure 2-4: We've inserted four wave files into our DLS Collection.
We can now adjust the properties for these waves by right-clicking on them and choosing Properties to bring up their property window. The most important settings to note are Root Note and loop points (both in the Wave tab) and compression settings (in the Compression/Streaming tab). We'll return to compression a bit later. Root Note specifies the base note that this wave corresponds to, also known in other wave authoring programs as the "unity pitch" or "source note." For our piano sounds above, determining the root note is made easier because the root note is included right in the wave file names. DirectMusic Producer will also automatically use any root note information stored with the wave file, which some wave authoring tools provide. Otherwise, we can adjust the root note manually from the property page, either with the scroll arrows or by playing the appropriate note on a MIDI keyboard.


Figure 2-5: Setting the proper root note for our BritePiano_C4 wave.
This particular piano wave doesn't loop, so we don't need to worry about the loop settings. Again, if loop settings had been set on the source wave file, DirectMusic Producer would automatically use that information to set loop points.

DirectMusic Producer provides some basic editing features on waves in the Wave Editor, which can be opened by double-clicking any wave in the Waves folder (or indeed, any separate wave file you've inserted into your DirectMusic project).


Figure 2-6: The Wave Editor. You can specify whether waveform selections (made by clicking and dragging) should snap to zero crossings via Snap To Zero, and you can specify a loop point by selecting Set Loop From Selection.
The Wave Editor window supports clipboard functions, so you can copy to and paste from other wave editing applications if you wish. You can also try out any loop points you've set (or just hear the wave you've created) by playing the wave. As with Segment playback, you can use the Play button from the Transport Controls toolbar, or use the Spacebar as a shortcut to audition your wave. Note that you can only start playing your waves from the beginning, regardless of where you position the play cursor, as DLS waves cannot be started from an offset.

Now that we've set up our waves, let's add them to a DLS instrument so we can play them via MIDI notes. To create an instrument, right-click on the Instruments folder and choose Insert Instrument.


Figure 2-7: Creating an instrument.
You'll notice that each instrument is assigned its own instrument ID, a unique combination of three values (MSB, LSB, and Patch) that allows for more than 2 million instruments, which is a good deal more freeing than the traditional 128 MIDI patch changes. DirectMusic Producer will make sure that all loaded DLS Collection instruments use unique instrument IDs, but if you author your DLS Collections in different projects, you should take care that they don't conflict. Otherwise, if two instruments with identical IDs are loaded at the same time, DirectMusic will have no way of knowing which one you want to use. The General MIDI DLS Collection (gm.dls) found on all Windows machines uses 0,0,0 through 0,0,127, so our new instrument probably defaulted to instrument ID (0,1,0).
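To see why those three values give us so much headroom, here is a small illustration of the arithmetic. The packing shown (MSB, LSB, and patch each occupying their own byte of a single value) is one common convention and is meant purely as an illustration, not as a definitive statement of the DirectMusic file format.

#include <cstdio>

// Illustrative only: combine MSB, LSB, and patch into one ID value.
unsigned long MakeInstrumentId(unsigned char msb, unsigned char lsb,
                               unsigned char patch)
{
    return ((unsigned long)msb << 16) | ((unsigned long)lsb << 8) | patch;
}

int main()
{
    std::printf("Possible IDs: %d\n", 128 * 128 * 128);           // 2,097,152
    std::printf("GM piano (0,0,0):       0x%06lX\n", MakeInstrumentId(0, 0, 0));
    std::printf("Our instrument (0,1,0): 0x%06lX\n", MakeInstrumentId(0, 1, 0));
    return 0;
}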

Let's open up the Instrument editor by double-clicking on the instrument in the project tree.


Figure 2-8: The Instrument editor.
There are lots of options here that really demonstrate the power of a DLS-2 synthesizer, but for now let's just start with our basic instrument. The area above the piano keyboard graphic is where we define which wave or waves should be played for a given note. Each wave assigned to a range of notes is called a DLS region. As you can see, our new instrument defaults to a single region, which plays the wave BritePiano_C5 over the entire range of MIDI notes. The piano keyboard note with a black dot on it (C5) indicates the root note for the region. Remember that the root note plays the wave as it was authored, notes above the root note pitch the wave up, and notes below the root note pitch the wave down.
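To make that repitching concrete, here is a small sketch of the math a sampler applies: each semitone of distance from the root note scales the playback rate by the twelfth root of two. The names here are illustrative and are not part of any DirectMusic interface.

#include <cmath>
#include <cstdio>

// Playback rate multiplier for a MIDI note relative to a region's root note.
double PitchRatio(int midiNote, int rootNote)
{
    return std::pow(2.0, (midiNote - rootNote) / 12.0);
}

int main()
{
    // With a root note of C5 (MIDI note 72 in this numbering), a note one
    // octave up doubles the rate and one octave down halves it.
    std::printf("C6 (84): %.3f\n", PitchRatio(84, 72)); // 2.000
    std::printf("C4 (60): %.3f\n", PitchRatio(60, 72)); // 0.500
    return 0;
}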

Looking along the left edge of our region, you can see the labels DLS1, 1, 2, and so on. These labels identify the various layers of this instrument. Layers allow you to create overlapping regions, one of the nice features of the DLS-2 specification. One of the more common uses is adding multiple velocity versions of a wave, where, as the note is struck harder, the instrument switches to a different wave file. This is particularly effective for percussion instruments. Notice that each region has a velocity range that can be set in the Region frame. Multiple layers also mean that a single MIDI note could trigger a much more complex instrument composed of several simultaneously played wave files. Remember that regions on a single layer cannot overlap. The DLS1 layer is the single layer that was supported in the original DLS-1 specification, and it is generally still used to author any single-layer DLS instruments. For this simple instrument, we won't worry about using multiple layers for the moment.

Our first step is to create regions for the rest of our piano waves. We want to resize that existing region so there is room on the layer for the other waves (that, and we probably don't want to try to play this wave over the entire range of the MIDI keyboard!). If you move the mouse over either edge of the region, you'll see that it turns into a "resize" icon, and you can then drag in the outer boundaries of the region.


Figure 2-9: Resizing a region with the mouse. Alternatively, we could change the Note Range input boxes in the Region frame.
By the way, this is similar to the functionality for note resizing (changing start time and duration) in the MIDI-style Tracks (Pattern Tracks and Sequence Tracks). There are other similarities between region editing and note editing: the selected region turns red, just as notes do, and the layer we are currently working with turns yellow, just as the current Track does in the Segment Designer window. And just as you insert notes by holding the Ctrl key while clicking, you can create a new region the same way.


The new region keeps resizing for as long as the mouse button is held down and dragged (or, as before, you can resize the region afterward by grabbing one of its side edges). As an alternative to drawing a region with the mouse, you can right-click and choose Insert Region from the menu that appears.

Every region that we create defaults to use the same wave that the first region did. So we'll want to choose a more appropriate wave from the drop-down aptly labeled Wave. Let's assign BritePiano_E5 to this new region we've created.


We repeat the process for the rest of our waves to build up our instrument. Notice as you click on each region that you can view its particular properties for wave, range, root note, etc., in the Region frame.


Figure 2-10: The completed regions for our piano instrument.
There are several potential issues to discuss here. First, should an instrument span the entire keyboard? On the plus side, the instrument would make a sound no matter what note was played. On the minus side, when the source wave is extremely repitched (as BritePiano_C4 is in the downward direction and BritePiano_E5 is in the upward direction), it can become almost unrecognizable, dramatically changing timbre and quality. Most composers opt not to cover the full keyboard, if only so they can tell when their music goes significantly outside an acceptable range for their instruments. If it does, the music can be adjusted (or the DLS instrument expanded to include additional regions).

A second question is how far above and/or below the root note a region should span. That is, does the wave maintain the quality of the source instrument better as it is pitched up or down? This can vary significantly from wave to wave (and the aesthetic opinions of composers also differ quite a bit). For the above example, we did a bit of both: the bottom three regions cover notes both above and below the root note, while the highest piano region (our BritePiano_E5 wave) extends upward from the root note. How far you can "stretch" a note can vary quite significantly based on the kind of instrument and the source waves.

Once you've created your instrument, you can try auditioning it in several ways. An attached MIDI keyboard will allow you to trigger notes (and indeed, small red dots will pop up on the keyboard graphic as notes are played). You can also click on the "notes" of the keyboard graphic to trigger the note. The Audition Options menu allows you to set the velocity of this mouse click and choose whether to audition all layers or only the selected one. (In the case of our example, we only have one layer, so the two options don't matter.)


Try out the piano instrument, paying particular attention to the transitions between regions. These are often difficult to match smoothly, as the pitch shifting of the waves begins to alter their characteristics and the instruments can begin to develop musical "breaks" where the tone quality changes significantly. If you are writing a purely tonal piece of music, it is often useful to author your DLS Collections such that the tonic and dominant tones (the scale root and the 5th degree) are the root notes of your regions. That way, the "best" version of each wave (not repitched) will be played for what are typically the most commonly heard tones in the music.

If you do need to tweak the source waves somewhat, you can either reauthor them and paste over the previous version in the wave bank, or you can make manual edits to how the regions play particular waves. The latter is useful when a wave is just slightly mispitched when played against another wave or when the levels of two recorded waves differ slightly. To override the wave's properties for this region, open the region's property window (by right-clicking and choosing Properties or, as with other property windows, by clicking on the region when the property window is already open).


Figure 2-11: A DLS region's property window. Here we've slightly adjusted the fine-tuning for BritePiano_G4 only as this region plays it back. If any other regions or instruments used the wave, their pitch would be unaffected by this change.
Note One common question is whether using overlapping regions on multiple layers could be used to more smoothly crossfade between waves. Unfortunately, all regions are triggered at the same velocity, so there is no easy way to make one region play more quietly than another for a given MIDI note. With some effort, you could potentially create several smaller regions over the transition that attenuate the source waves, gradually bringing their volume up (by overriding attenuation in the region's property window) to complete the crossfade.


And that's enough to create our basic instrument. If we wanted to, we could adjust any of a number of additional properties on the instrument's articulation, which is where much of the power of a DLS-2 synthesizer lies. The articulation allows you to specify volume and pitch envelopes, control a pair of low-frequency oscillators (LFOs), and set up low-pass filtering parameters for the instrument. You can even create per-region articulations, where a region has its own behavior that differs from other regions of the instrument. We'll just set up the instrument-wide articulation for this piano. Since our original sample already has most of the volume aspects of the envelope built in, we'll just add a release envelope. This means that when a MIDI note ends, the instrument will fade out over a period of time rather than cutting off immediately, much like an actual piano. By dragging the blue envelope point to the left in the Instrument Articulation frame, we have set the release to be .450 seconds, so this instrument's regions will fade out over roughly a half-second when a MIDI note ends.
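To picture what that 0.45-second release does, here is a hypothetical sketch: after a note-off, the region's gain ramps from its current level down to silence over the release time. A linear ramp is shown purely for simplicity; the actual envelope curve is up to the DLS-2 synthesizer.

#include <cstdio>

// Hypothetical linear release: gain as a function of time since note-off.
double ReleaseGain(double secondsSinceNoteOff, double releaseSeconds)
{
    if (secondsSinceNoteOff >= releaseSeconds)
        return 0.0;                                    // fully faded out
    return 1.0 - (secondsSinceNoteOff / releaseSeconds);
}

int main()
{
    for (double t = 0.0; t <= 0.5; t += 0.1)
        std::printf("t=%.1fs  gain=%.2f\n", t, ReleaseGain(t, 0.45));
    return 0;
}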


Figure 2-12: To add any per-region articulations (presumably different from the global instrument articulation), right-click on the region and choose Insert Articulation List.
Our basic piano instrument is now complete. We could open up the instrument's property window (as always, by right-clicking on it and choosing Properties) and give it a better name than Instrument, such as MyPiano. We then repeat the process for other instruments in our collection. Again, remember that DLS Collections will be loaded into memory when a Segment is using them, so you'll want to budget their size according to available memory on the system on which they will be played.

Stereo Waves in DLS Collections
One unfortunate omission from the DLS-2 specification is that stereo waves are not supported for regions. DirectMusic Producer works around this by using DLS-2's aforementioned support for multiple layers. When a stereo wave is used in a DLS region, DirectMusic Producer creates a hidden second region, separates the two source channels, and plays the resulting pair of mono waves in sync across the two regions.


Figure 2-13: The top single stereo region is actually stored by DirectMusic Producer as something closer to the bottom pair of mono regions.
All of this is transparent to the composer (and to any content that uses the DLS instrument) — they can use stereo waves the same as mono waves without having to do anything differently. But it does have several impacts on content authoring. The primary implication of DLS-2 not supporting stereo waves is that stereo waves cannot be compressed in DirectMusic Producer. Otherwise, DirectMusic would have to somehow figure out how to separate the left and the right channels to place them in separate mono regions. If you do intend to use stereo waves and they must be compressed, you must author them as pairs of mono waves, create regions on two layers, and then set those regions' multichannel properties to specify the channels and phase locking of the waves (so they are guaranteed to always play in sync).


Figure 2-14: Making our piano's right channel wave play in sync with the left channel and on the right speaker. Both channels would be set to the same Phase Group ID, and one of them should have the Master check box checked.
Using DLS Collections in Segments
Now that we've assembled our DLS Collection, our Segment needs to use that collection's patch changes in order to play its instruments. Remember that patch change information is stored in a Band in the Band Track. If we created a MIDI-based Segment from scratch, we would insert a Band Track (right-click in the Segment Designer window and choose Add Tracks…), and then insert a Band into the Track (by clicking in the first measure and hitting the Insert key).

However, let's assume for our very first piece of music that we just imported a MIDI file (from the File menu, choose Import File into Project…, then MIDI File as Segment…). In this case, we already have a Band created for us containing the patch changes from the original MIDI file.

We now want to edit this Band to use our new DLS instruments (rather than the General MIDI ones, or worse, instruments that don't exist).


Figure 2-15: Double-click the Band in the Band Track to open the Band Editor.

Figure 2-16: The Band Editor window.
Remember that in addition to patch change information, you can set up the initial volume and pan for each channel. The grid at the right allows you to do this graphically (vertical corresponding to volume, horizontal corresponding to left-right pan) by grabbing a channel's square and dragging it. Alternatively, you can set up each Performance channel by double-clicking on it (or right-clicking and choosing Properties).

The properties window is where you set the instrument properties for the channel.


Figure 2-17: The Band Properties dialog box.
An instrument's priority allows DirectMusic to know which voices are the most important to keep playing in the event you run out of voices. Volume and Pan are once again the same functionality seen in the Band Editor's right grid. Transpose lets you transpose all notes played onto the channel ("Oct" is for octaves and "Int" is for intervals within an octave). PB Range lets you control how far the wave will be bent by Pitch Bend MIDI controllers.

Range is a somewhat interesting option. It instructs DirectMusic that we will only play notes within a certain range, and therefore that we only need the regions from this instrument that are in that range. While this can cut down on the number of DLS instrument regions that are in memory (and thus possibly the size of wave data in memory), it does mean that notes played outside of this range will fall silent. Because transposition, chord usage, or other interactive music features might cause our content to play in a wider-than-authored range, Range is generally not used (and thus left unchecked).

Getting back to the task at hand, this channel is currently using General MIDI patch 89 (remember that the other two aspects of an instrument ID, the MSB and LSB in the DLS Collection, are typically zero for General MIDI collections). We want it to instead play our new instrument, so we click on the button displaying the name of the current instrument, choose Other DLS… from the drop-down that appears (the other options are all of the instruments from gm.dls), and select our DLS Collection and instrument to use.


Figure 2-18: The Choose DLS Instrument dialog box.
If we now play our Segment, Performance channel one will use our newly created piano instrument.
Authoring Flexible Content for Dynamic Playback
Playing back long MIDI sequences isn't going to be particularly interactive, so we might want to take this opportunity to consider alternative solutions to strictly linear scores. For instance, you might want to "chop up" your score into smaller pieces of music that can smoothly flow into each other. Authoring music in this kind of segmented (no pun intended) manner can be as simple as creating a bunch of short looping clips, each of which follows the same progression, perhaps deviating in terms of instrumentation and ornamentation. Alternatively, the various sections could be much more distinct, and you could author transitional pieces of audio content that help "bridge" different areas of the music. In every case, you need to consider the balance between truly "interactive" audio (that is, music that can jump to a different emotional state quickly) and musical concepts such as form, melodic line, and so on. That is, how can a piece of music quickly and constantly switch emotional levels and still sound musically convincing? Finding a balance here is perhaps one of the most significant challenges in creating dynamic audio content.
Transitions
Now that we've got basic linear pieces of music, let's discuss moving from one piece of linear music to another. How do we move from one piece of music (the "source") to the next (the "destination")? DirectMusic provides an abundance of choices in how such transitions occur. You can specify a Segment's default behavior on the Boundary tab of its property page (right-click on the Segment). On the positive side, using these settings means you don't need to tell a programmer how your musical components should transition between each other. On the negative side, this locks a Segment into one specific transition behavior, which is often not desired; it's quite common to use different transition types based on both the Segment that we're coming from and the Segment that we're going to. Overriding the default can be a chore, particularly when using DirectX Audio Scripting. More often than not, audio producers end up having to communicate their ideas for transitions directly to the project's programmer for implementation. This, of course, assumes you're working on a game; if you are creating stand-alone music for a DirectMusic player, you'll have to stick with the default transition types you specify here.

Looking over the boundary options, there are quite a few that are fairly straightforward — Immediate, Grid (regarding the tempo grid; remember that the size of a grid can be specified in the Time Signature Track), Beat, and Measure. End of Segment and End of Segment Queue are also fairly easily understood — the former starting this Segment when the previous one ends and the latter starting this Segment after any other Segments that were queued up to play when the current Segment ended.

Marker transitions introduce us to another Track type that we can add to an existing Segment, aptly named the Marker Track. After adding a Marker Track to a Segment, an audio producer can add markers (also known as exit switch points) wherever it is deemed appropriate in the Segment, which will postpone any requested transition until a specific "legal" point is reached. Just to clarify, the transition is specified on the Segment that we're going to, even though the boundary used will be based on the currently playing Segment that we're coming from.

Transitions do not have to occur directly from one piece of music to another. The content author can create specific (typically short) pieces of music that make the transition smoother, often "ramping down" or "ramping up" the emotional intensity, altering the tempo, or blending elements of the two Segments. A programmer or scripter can then specify a transition Segment be used in conjunction with any of the boundaries discussed above.

As a more advanced behavior, when using Style-based playback, the programmer or scripter can specify particular patterns from the source and/or destination Segments (for instance, End and Intro, which plays an ending pattern from the source Segment's Style and an intro pattern from the destination Segment's Style) to similarly make the transition more convincing. We cover these embellishment patterns more in depth when we get into Style-based playback in Chapter 4.

By default, transitions go from the next legal boundary in the source Segment to the beginning of the destination Segment. There are, of course, cases where we might want the destination Segment to start from a position other than the beginning. Most commonly, we could intend for the destination Segment to start playing from the same relative position, picking up at the same measure and beat as the source Segment that it is replacing.

For this kind of transition, alignment can be used, again in conjunction with the various boundaries and transition Segment options above. Align to Segment does exactly what we outlined above. As an example, if the source Segment is at bar ten beat two, the destination Segment will start playing at its own bar ten beat two. Align to Barline looks only at the beat in the source Segment and can be used to help maintain downbeat relationships. Using the same example as above, the destination Segment would start in its first bar but at beat two to align to barline with the source Segment. As another technique for starting a Segment at a position other than its beginning, enter switch points (specified in the "lower half" of Marker Tracks) can be used. In this case, whenever a transition is intended to occur, the destination Segment will actually force the source Segment to keep playing until an appropriate enter switch point is reached in the destination Segment. Alignment generally provides sufficiently satisfactory transitions with less confusion, so enter switch points are rarely used. Remember in both cases that the limitations outlined earlier for pause/resume apply here; starting a Segment from a position other than the beginning will not pick up any already sustained MIDI notes, though it will pick up waves in Wave Tracks, as well as subsequent MIDI notes.
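For completeness, here is roughly what these requests look like when a programmer makes them through the DirectX 9 API. The flag combinations and Segment pointers below are illustrative choices, not the only way to express these behaviors.

#include <dmusici.h>

// Sketch: two ways of transitioning to a destination Segment.
void TransitionSketch(IDirectMusicPerformance8* pPerformance,
                      IDirectMusicSegment8*     pDestination,
                      IDirectMusicSegment8*     pTransition)
{
    // Switch at the next measure boundary, aligning the destination to the
    // source Segment's timeline so it picks up mid-stream.
    pPerformance->PlaySegmentEx(pDestination, NULL, NULL,
                                DMUS_SEGF_MEASURE | DMUS_SEGF_ALIGN |
                                DMUS_SEGF_VALID_START_MEASURE,
                                0, NULL, NULL, NULL);

    // Or: wait for the next marker (exit switch point) in the source and
    // play an authored transition Segment on the way to the destination.
    pPerformance->PlaySegmentEx(pDestination, NULL, pTransition,
                                DMUS_SEGF_MARKER | DMUS_SEGF_AUTOTRANSITION,
                                0, NULL, NULL, NULL);
}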
