This was a really fun project that wasn’t originally in the plan. It came out of brainstorming while I was in the middle of writing the original English press release. Since the drum machine’s user interface evolved into a spaceship cockpit, it only seemed right to continue the space theme, and Star Trek kept coming to mind. Also, as a web developer I often think about internationalization.
So, what if sBASSdrum was not just internationalized but inter-species-alized?
Recalling that I had seen fans speaking in Klingon, it seemed like a great fit for this project. The pronunciation is starkly different from English, the glyphs are different, and there are plenty of people who speak it seriously. I asked friends on Facebook and received a few leads, but I think it was the Twitterverse that came through:
After some email coordination I sent him the English press release along with a suggestion for the Klingon version, which made references to big strong warrior hands, Bat’leths, and fierce sounds. He helped with cultural notes and really polished up the text. Then he went one step further and typed it up in pIqaD as well. The result: a really cool press release that echoes the fun of sBASSdrum.
The crew at 8:45a, a San Francisco-based cinematography and photography company, has put together a totally fun club-themed video showing off the energy and enjoyment that comes from using the sBASSdrum iPad app whether you are the musician or dancing in the audience. The soundtrack uses only the sounds in the app—the actual song was resequenced using tracker software.
Director and Cinematographer: Cressanne Credo
Editor: Hubert Lamela
BTS Photographer: Jennifer Cabugao
Creative Director: JJ Casas
Soundtrack: Amy Lee / ANI
DJ actor: Qualafox
Filmed on location at Geekdom SF and a San Francisco apartment.
One of the fun things with this project was figuring out what features to put into the interface. I wanted the UI to immediately contain all the controls the user needed, with no hidden menus or popup panels. (A future version will need those, but not yet.) The reasoning: for kids and novices especially, digging around menu systems means spending more time exploring the app than making music. I wanted the user to begin playing immediately.
But, a cockpit?
Originally it wasn’t. It was just going to be a simple drum machine. I knew I wanted it to have some sort of instantaneous readout of the audio being produced in the system. (I have a fondness for oscilloscopes.) It also needed to have relatively big buttons:
And the more I thought about my favorite synthesizers I found they all basically did this:
Cool-looking 3D-ish interface
I also found that they:
Had way too many buttons
Scattered their controls all over the place
Used fairly monochromatic color palettes
What I wanted was something that looked as cool as:
Additionally, I wanted it to be playable by holding the corners of the iPad and playing with just your thumbs.
First came the sketch pad. I almost always start there for any project. I’m a big fan of paper prototypes because you can get ideas out of your head quick, everyone at the table can see what you’re working on, and it has a physicality to it.
The first cut had a lot of buttons all over it. The top 1/4 of it was intended to be the visualizer and effects unit. The overall idea felt good but the buttons were too far apart: I couldn’t hold the iPad by the corners. It also meant there were too few main buttons: playing just one synth on the left was boring.
The next paper prototype attempted to focus just on the drums but the whole idea wasn’t gelling.
Finally, the idea of different instrument panel components coming together began to make more sense. The least-used or ambidextrously-used controls could go in the center, and more-used buttons could go toward the corners. The other reason to put scopes up top or in the center was that your hands would be covering the bottom corners.
Wireframes in Adobe Illustrator
The paper prototypes felt decent and doable even on an iPad Mini, which is the minimum target platform for this project. Now it was time to begin putting details in and trying to figure out how to make this look more entertaining. Over Thanksgiving break several iterations happened which resulted in almost the same interface that is there today.
It turns out that the buttons were still too big and too far apart. The window also took too long to render with my limited iOS skills. And despite looking cool, the left-hand synth had too few buttons and required an awkward 45-degree turn of the wrist.
The final interfaces
After roughly tracing the Illustrator vectors in Photoshop, dimension and lighting were added to give it more depth. Ridges appeared, colors got darker, the left panel changed colors to make it distinct from the right one, the synth-switching and drum-switching controls moved farther away from the pads (because they were inadvertently being tapped), and the scopes became smaller. The looping controls at the top weren’t originally going to make the first version of the app, but without them the app felt too hard to play. Most people don’t have the coordination to do right- and left-hand tapping independently, so looping solves a lot of this problem: record some drums or bass, then layer on top, then play with the effects later.
Making the user interface was a very fun and very iterative process. The early designs “failed fast” and I kept putting new designs out for friends to try. The design is still far from perfect and many aspects are overly cartoonish, but the organization feels solid, and plans to upgrade all of this with better art aren’t far off.
PHEW. After several months of work the sBASSdrum iPad app is finally launched! I have so many stories to tell and hopefully over the coming weeks I’ll be able to detail a lot of them here.
First, though, I could not have done this without the tremendous help from friends, family, and internet acquaintances. Launching a product is much bigger than one person alone and there is simply not enough time in the world to learn everything to the point of high proficiency.
The first version of the only drum machine ever built as a spaceship is now downloadable on the Apple App Store. sBASSdrum (pronounced “space drum”) is a colorful musical instrument that combines drums and synthesizer sounds in one interface, requires no setup, and uses curated sound sets to maximize harmony. This demo video shows off the whole suite of features, from beat-making, to overdub looping, to the sample-based synthesizer, and finally effects.
The app can be downloaded for free until February 14, and then it will be just $2.99 thereafter.
I will admit: I’m not very good at making snares yet. But perfection can wait for v0.2. 🙂
I’ve been reading over the very excellent Synth Secrets set of articles and learning about all the amazing ways engineers have figured out to make passable drum sounds with electronic equipment. I needed this information because I had never really thought of what a snare drum is and how to recreate one with synthesis. Worse yet, I’m basically working with simplified waveforms and envelopes. So, this insight from Gordon Reid’s writings is really helping.
OK, so what is a snare drum?
A wood shell (blue in the picture)
A top flexible membrane and a bottom flexible membrane (top head, bottom head)
A bunch of snares strapped to the bottom (springs)
How does it work?
You hit the top drum head (the “batter” head)
The air bounces around and pushes the lower drum head outward
The bottom head slaps the snare springs
The snares pop off the head and then slap back down, each part of the snare wires coming back into contact with the head at different rates, creating thousands of tiny popping sounds
And you know the result: a “tssshhh” snappy sound with possibly a little tone under that sound.
Let’s take a look at some of my favorite snares:
Simulating this in a drum machine is pretty amazing. If you were to physically model this it would probably take quite the computer to model each of the parts of the springs and the crazy reflections of the tiny snare slaps bouncing off of the sides of the drum head and reflecting back, further rippling back onto the snares themselves, … and so on.
But engineers have figured out a much simpler way of doing this in hardware:
The top head and bottom heads can be simulated with just simple triangle waves
And a noise generator can be used to take the place of the snares
The resulting waveform looks like the typical diminishing funnel shape with a strong carrier tone and some noise throughout.
Then, there’s what I decided to do:
Breaking this down:
The 2 left columns are filtered and shaped white noise
The 2 right columns are shaped triangle waves
The first channel is plain white noise with a low-pass that falls off around 8.6kHz, so its color is a thick “psshhh”. This is meant to simulate the snares’ longer ringing against the bottom head. The second channel is also white noise, but through a band-pass that peaks around 2.8kHz with a Q that lets in high frequencies up to maybe 12kHz and down to 300Hz. This one has a much quicker volume decay. The net effect of the two shaped noise waves is a fairly guttural “kshhh” around 3kHz (like the initial slaps of the snares against the bottom head) with a pretty full frequency range, so it sounds crisp. But since the first channel’s noise lasts longer than the second channel’s, it sounds as if someone were slowly cranking down a low-pass filter over time.
Next, the two triangle waves are tuned similarly to Reid’s tutorial: 180Hz and 330Hz. I make the 330Hz fall off quickly and the 180Hz fall off more slowly. The effect is that the 330Hz acts like the initial hit of the drum stick against the top head and the 180Hz acts like the main resonant tone of the snare drum.
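To make the recipe concrete, here’s a rough Python sketch of the four-channel idea. This is not the actual sBASSdrum code: the filters are generic RBJ-cookbook biquads, and the envelope decay times and mix weights are my guesses at values matching the description above.

```python
import math
import random

SR = 44100  # sample rate in Hz

def biquad(x, b0, b1, b2, a0, a1, a2):
    """Direct-form-I biquad filter over a list of samples."""
    out, x1, x2, y1, y2 = [], 0.0, 0.0, 0.0, 0.0
    for s in x:
        y = (b0 * s + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2) / a0
        x2, x1 = x1, s
        y2, y1 = y1, y
        out.append(y)
    return out

def lowpass(x, f0, q=0.707):
    w0 = 2 * math.pi * f0 / SR
    alpha, c = math.sin(w0) / (2 * q), math.cos(w0)
    return biquad(x, (1 - c) / 2, 1 - c, (1 - c) / 2, 1 + alpha, -2 * c, 1 - alpha)

def bandpass(x, f0, q):
    w0 = 2 * math.pi * f0 / SR
    alpha, c = math.sin(w0) / (2 * q), math.cos(w0)
    return biquad(x, alpha, 0.0, -alpha, 1 + alpha, -2 * c, 1 - alpha)

def decay(n, tau):
    """Exponential volume envelope with time constant tau (seconds)."""
    return [math.exp(-i / (tau * SR)) for i in range(n)]

def triangle(freq, n):
    """Naive (non-band-limited) triangle oscillator."""
    return [4 * abs((freq * i / SR) % 1.0 - 0.5) - 1.0 for i in range(n)]

def snare(dur=0.25):
    n = int(dur * SR)
    noise = [random.uniform(-1, 1) for _ in range(n)]
    # channel 1: low-passed noise, longest decay (snares ringing on the head)
    ch1 = [s * e for s, e in zip(lowpass(noise, 8600), decay(n, 0.08))]
    # channel 2: band-passed noise, quick decay (initial "kshhh" slap)
    ch2 = [s * e for s, e in zip(bandpass(noise, 2800, 0.5), decay(n, 0.03))]
    # channel 3: 330Hz triangle, fast decay (stick hitting the batter head)
    ch3 = [s * e for s, e in zip(triangle(330, n), decay(n, 0.01))]
    # channel 4: 180Hz triangle, slower decay (main resonant tone)
    ch4 = [s * e for s, e in zip(triangle(180, n), decay(n, 0.05))]
    mix = [0.4 * a + 0.4 * b + 0.3 * c + 0.5 * d
           for a, b, c, d in zip(ch1, ch2, ch3, ch4)]
    peak = max(abs(s) for s in mix)
    return [s / peak for s in mix]  # normalize to -1..1
```

Because the two noise channels decay at different rates, the mix really does sound like a low-pass filter slowly closing, which is the effect described above.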
What’s in a good kick drum? If you ask me, I think it’s:
At least 100ms, probably no more than 200ms
Light, sharp impact sound on the attack
Lowering of frequency over time
Let’s take a look at the raw waveforms of some of my favorite kick drums:
So, they all have similar traits. If you look up a spectral graph of one of the kicks it might look like this:
As you can see, in the first 0.05 seconds there’s a lot going on across all the frequency ranges. After that it hovers somewhere between 100Hz and 50Hz in a slow descent. I think most people do this because a natural bass drum works pretty similarly:
You step on the pedal
The pedal swings and slaps the bass drum head hard
You get an audible “smack” as the pedal contacts the head and a lot of energy is transferred into the drum
The air bounces around and pushes both the drum head and the drum front back and forth in a big slow oscillation
Then over time the energy is lost and the oscillation slows down along with the volume
Even when synthesizing a bass drum manually it just seems more natural to follow these rules.
So here’s how I made my bass drum. First, this is what it sounds like:
Let’s break this down into the components:
The click: this is a faked-out slap of the drum pedal against a drum head. Obviously, synthesizers don’t have a pedal or a drum head, but we’re so used to hearing it that the sound feels weird without it. Plus, it makes the kick audible on crappy computer speakers. Here I recorded the sound of me punching (yes) a receipt from Target with my LS-12 field recorder.
The initial reverberation: I like to do this with a sine wave that rapidly falls from a high frequency to 0hz. It makes a “pew” sound if it’s slow, or something more like a long “click” if it’s slightly longer. When mixed with the click it makes a nice audible “smack”.
The body: Again this uses a sine wave that falls but it does it slowly. I start it around C4 (261Hz) and let it fall down to C2 pretty quick (131Hz) and then trail off from there.
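Putting those three parts together, here’s a hedged Python sketch of the same recipe. It isn’t the app’s code: the receipt-punch click is stood in for by a tiny noise burst, and the sweep curves, decay times, and mix weights are guesses tuned to the description above.

```python
import math
import random

SR = 44100  # sample rate in Hz

def sweep_sine(n, f_start, f_end, curve=1.0):
    """Sine whose frequency falls from f_start to f_end Hz over n samples.
    Phase is accumulated so the sweep has no discontinuities; curve < 1
    makes the frequency fall fast at first, then trail off."""
    out, phase = [], 0.0
    for i in range(n):
        t = i / (n - 1)
        f = f_start + (f_end - f_start) * (t ** curve)
        phase += 2 * math.pi * f / SR
        out.append(math.sin(phase))
    return out

def decay(n, tau):
    """Exponential volume envelope with time constant tau (seconds)."""
    return [math.exp(-i / (tau * SR)) for i in range(n)]

def kick(dur=0.25):
    n = int(dur * SR)
    # the click: a short noise burst standing in for the recorded receipt punch
    click = [random.uniform(-1, 1) * e for e in decay(n, 0.002)]
    # the initial reverberation: fast sine sweep from ~1kHz down to 0Hz
    smack = [s * e for s, e in zip(sweep_sine(n, 1046.0, 0.0, 0.3), decay(n, 0.01))]
    # the body: sine falling from C4 (261Hz) toward C2 (131Hz), slow decay
    body = [s * e for s, e in zip(sweep_sine(n, 261.0, 131.0, 0.4), decay(n, 0.08))]
    mix = [0.3 * a + 0.4 * b + 0.8 * c for a, b, c in zip(click, smack, body)]
    peak = max(abs(s) for s in mix)
    return [s / peak for s in mix]  # normalize to -1..1
```

The click and smack carry the high-frequency energy at the attack, and the body supplies the slow descent through the 100-200Hz range.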
I made the sine wave in Adobe Audition (just a pure 261Hz wave) and then set it as instruments in Renoise:
I know most people can’t read Renoise format, but it goes like this:
Each vertical column you can think of as an instrument
The left two columns are grouped together
The far left is the “click”: a sine wave at C6 (~1kHz) that rapidly falls to 0Hz linearly
The middle is the “body”: a sine wave at C4 (~261Hz) that initially falls a bit, probably down to about 150Hz, then slowly falls after that
The receipt sound is on the 4th track.
The two sine waves’ outputs are further modified by a parametric EQ which tries to accentuate the 100-200Hz range. One thing I’ve learned from testing: you never know what kind of speakers your listener will have, and if you have a sine wave that is accentuated in the 60-150Hz range with the frequency falling through that range, then you’ll most likely get a decent “whomp” out of it. The corollary is that the “click” and “receipt” sounds are much higher in the frequency range (above 300Hz), so on tiny cell phone speakers with no bass response you will still at least get a small “pop” sound.
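The post doesn’t say which EQ design is used, but a standard way to accentuate a band like 100-200Hz is a peaking biquad from the RBJ Audio EQ Cookbook. A sketch, where the 150Hz center, Q of 0.8, and +6dB boost are illustrative numbers rather than the app’s settings:

```python
import cmath
import math

SR = 44100  # sample rate in Hz

def peaking_coeffs(f0, q, gain_db):
    """RBJ-cookbook peaking-EQ biquad coefficients (b, a)."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0 / SR
    alpha = math.sin(w0) / (2 * q)
    c = math.cos(w0)
    b = (1 + alpha * A, -2 * c, 1 - alpha * A)
    a = (1 + alpha / A, -2 * c, 1 - alpha / A)
    return b, a

def magnitude(b, a, freq):
    """|H| of the biquad at a given frequency, by evaluating H(e^jw)."""
    z = cmath.exp(-1j * 2 * math.pi * freq / SR)  # acts as the z^-1 term
    num = b[0] + b[1] * z + b[2] * z * z
    den = a[0] + a[1] * z + a[2] * z * z
    return abs(num / den)

# a +6dB boost centered at 150Hz, moderately wide
b, a = peaking_coeffs(150.0, 0.8, 6.0)
```

At the center frequency this filter’s gain is exactly 10^(6/20) ≈ 2×, tapering back toward unity away from the boosted band.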
Anyways, let’s take a look at the waveform the 3 instruments made together:
Lots of high frequencies in the front, an expanding “whomp” in the middle covering a big spectrum, and most of the energy is out by 0.25 secs.
A follower on Facebook asked how the viz is done. It’s pretty simple, actually, but you have to know how things flow in the app.
The keys are just controllers. Instead of MIDI notes they fire off messages of “bass1” or “drumPad3”. The routine that handles that has access to the star “size” and bumps it up.
The starfield routine draws the stars at that new size and, over time, shrinks the stars back to “normal” size. The stars are randomly added to the starfield, and I do some simple trig to figure out their angle from the absolute center so that, as the stars accelerate towards the edges of the screen, they at least appear to come out radially from the center.
The starfield also clears itself on every frame, except when you push the throttle up. That changes it to darken on every frame by some %. At the top of the throttle it isn’t darkening much so you get to see all the previous frames, hence the zooming look to it.
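In pseudo-Python, the per-frame star update looks something like the sketch below. The real app draws into a UIView in Objective-C; the canvas size, easing factor, and speed scaling here are made-up stand-ins, not values from the app.

```python
import math
import random

W, H = 1024, 768          # assumed canvas size, not the app's real dimensions
CX, CY = W / 2, H / 2     # absolute center of the starfield

class Star:
    def __init__(self):
        self.x = random.uniform(0, W)
        self.y = random.uniform(0, H)
        self.size = 1.0

    def bump(self, amount=3.0):
        """Called when a pad fires (e.g. a "bass1" or "drumPad3" message)."""
        self.size = amount

    def step(self, speed):
        """Advance one frame: move radially out from the absolute center
        and ease the size back toward normal after a bump."""
        angle = math.atan2(self.y - CY, self.x - CX)
        dist = math.hypot(self.x - CX, self.y - CY)
        v = speed * (1 + dist / 100.0)   # farther stars move faster
        self.x += v * math.cos(angle)
        self.y += v * math.sin(angle)
        self.size += (1.0 - self.size) * 0.2

def trail_alpha(throttle):
    """How strongly to darken the previous frame instead of clearing it:
    throttle 0 clears fully; at full throttle most of the last frame
    survives, which produces the zooming trails."""
    return 1.0 - 0.9 * throttle
```

The `atan2` call is the “simple trig” mentioned above: it recovers each star’s angle from the center so the velocity can be applied radially.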
Speaking of frames, I have a timer that goes off about 20 frames a sec. That timer redraws the starfield. It also tells the scopes to redraw.
The scopes in the left and right are looking at the output buffer for the actual audio being sent out the headphone jack. The left/right scopes are reversed, hence the mirrored pattern. The scopes are 2 separate UIViews and I’ve just told them to draw whatever is in the current audio buffer.
The audio buffer is being handled by Novocaine and I’m currently at about 12 ms, which works out to roughly 529 samples.
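For the curious, that sample count is just latency times sample rate, assuming 44.1kHz output:

```python
SR = 44100               # output sample rate in Hz
latency = 0.012          # ~12 ms buffer, per the post
frames_per_buffer = round(SR * latency)  # ~529 frames
```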
The center “target” is actually the effects slider, and when you drag it around that’s just a button being clipped inside a UIView. There’s a permanent low/high-pass filter on the output signal, and that effects slider just selects the algorithm and sets the Q and f0 parameters accordingly.
And the volume throttle is a UIView that’s inside another view. I’ve constrained it to only go up/down, so the outer view is working as a track.
Oh, and the whole starship cockpit is one very large PNG with a transparent cutout in the middle for the starfield. The buttons are each a custom UIButton that does things like turn on/off when you press them, or in the case of the upper right drum pad buttons they work as toggles.
Oy, Objective-C is kicking my butt. 🙁 It has been so long since I’ve had to deal with pointers that I’m very rusty with pointer arithmetic. But tonight was a victory of sorts.
I loaded up a snare PCM sample I’m using as a test sound for a (hopefully) upcoming iOS app. It is a stereo, 32-bit WAV file, 44.1 KHz, and 94636 bytes. The other night I thought my calculations were slightly off, but that’s just because the first 44 bytes are just a WAV header—I’m sure I knew that at some point. So that means each sample frame (a frame having all the channels’ values at a point in time) is 8 bytes: 4 bytes for the left channel, 4 bytes for the right channel. The challenge was to convert this into a normalized format in the iPad’s memory.
So some light math needed to be applied to allocate 2 buffers: one for data coming off of disk and one for the converted sample. The thing is, AVAssetReader reads in chunks. So it took me a bit of searching to find out how to use NSMutableData. A conversion method steps through each sample in the input buffer and writes it to the output buffer but converted to 16-bits. (Yay, bitshifting!)
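The conversion step itself is simple once the header is skipped. Here’s a hedged Python sketch of the same idea (the real code does this in Objective-C over AVAssetReader chunks; I’m assuming the 32-bit samples are integers, since bitshifting implies integer PCM):

```python
import struct

def pcm32_to_pcm16(raw):
    """Convert interleaved 32-bit integer PCM bytes to 16-bit PCM bytes
    by keeping the top 16 bits of each sample (arithmetic shift right 16)."""
    n = len(raw) // 4
    samples = struct.unpack('<%di' % n, raw[:n * 4])
    return struct.pack('<%dh' % n, *(s >> 16 for s in samples))
```

For example, the 32-bit sample 0x12340000 comes out as the 16-bit sample 0x1234, and the output is half the size of the input.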
I did some calculations in Adobe Audition to see if my numbers were sane: it’s 0:00.268 sec long. Checking: (94636 bytes – 44 bytes)/44100/8 turns out to be 0.268117914. Bingo. So the size of the file and playtime are correct. And, since that’s a 32-bit file and I’m turning it into 16-bit in memory, I should be allocating a buffer that is half of 94592 bytes, and it turns out I do in fact come up with the correct number: 47296.
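The same sanity check as a snippet, using the numbers from the post:

```python
total_bytes = 94636      # size of the WAV file on disk
header_bytes = 44        # standard RIFF/WAV header
frame_bytes = 8          # 2 channels x 4 bytes (32-bit) per frame

frames = (total_bytes - header_bytes) // frame_bytes   # number of sample frames
seconds = frames / 44100                               # duration at 44.1 kHz
pcm16_bytes = (total_bytes - header_bytes) // 2        # 16-bit data is half the size
```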