Synthetic Reality Forums
Author Topic: synspace: Drone Runners 1.0.07 Release Notes
samsyn
Administrator

So, I waded back into the waters of 'note velocity'

Mainly because having the chording arpeggiator play at the same time as you tap on the keyboard yourself raises relative loudness issues.

So I have re-jiggered things once again, this time to support a separate note velocity per note source (mainly the KB, the arpeggiator, and the LOOP sequences). I also widened the number of bits used to store it from 2 to 4 (whoo! four bits!), giving me 16 levels, which I have very scientifically decided to make 0.75dB steps for about a 12dB range (24dB by some measures), going from 'whispery, but not inaudible' to 'just loud enough you're pretty sure you have to turn the volume down'.

Before, I had a little less range, but in only 4 steps, so each step was a huge change. Now the change from step to step is almost imperceptible.

Of course, REAL synths use 7 bits, but then they actually measure a velocity, and I am just letting you manually pick a velocity. Except when I get notes from the vocoder; there I turn the actual note energy into a matching velocity.
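For the curious, the loudness math works out to something like this (a minimal Java sketch with my own made-up names, not the actual sS:DR code, assuming a straight 0.75dB-per-step ladder with step 15 at full volume):

[code]
public final class NoteVelocity {
    // 4-bit velocity: 0..15, each step 0.75 dB below the previous one.
    static final double DB_PER_STEP = 0.75;
    static final int MAX_STEP = 15;

    // Convert a velocity step to a linear gain (step 15 -> 1.0).
    static double stepToGain(int step) {
        int s = Math.max(0, Math.min(MAX_STEP, step));
        double db = (s - MAX_STEP) * DB_PER_STEP;     // 0 dB down to -11.25 dB
        return Math.pow(10.0, db / 20.0);
    }

    // Inverse: map a measured note amplitude (relative to full scale)
    // to the nearest velocity step, e.g. for vocoder-derived notes.
    static int energyToStep(double linearAmplitude) {
        if (linearAmplitude <= 0) return 0;
        double db = 20.0 * Math.log10(linearAmplitude);
        int step = MAX_STEP + (int) Math.round(db / DB_PER_STEP);
        return Math.max(0, Math.min(MAX_STEP, step));
    }

    public static void main(String[] args) {
        for (int s = 0; s <= MAX_STEP; s += 5) {
            System.out.printf("step %2d -> gain %.3f%n", s, stepToGain(s));
        }
    }
}
[/code]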

The only downside so far is that all my old recordings (not that I have many good ones) were done with the old system and are now whisper quiet.

I'm still finding all the corners where the '2 bit' version was baked in. Plus I'm adding some more buttons for setting it.

But I have the actual loudness 'engine' working pretty well.

I'm also adding buttons for the keyboard octave selection. Currently I do that with a control on the GROOVE panel, but it's not always 'up' and this is the sort of thing you really need at performance time, all the time, especially since the kb is so small. So I added a redundant control on the piano itself.

It's primarily an octave shift, but I think I will make a long-press set a 'transpose' offset, so you can easily map the kb to another key signature (allowing you to play in the key of C (easier) while what comes out is something else).

I don't strictly NEEEEED that, and it raises a couple of UI questions. It's also somewhat redundant to the master TUNE offset on the FM panel, but that affects all oscillators at the bottom level. A proper transpose only affects the KB itself.
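In other words, a tapped key would go through something roughly like this (a sketch with hypothetical names, assuming the transpose is just a semitone offset applied on top of the octave shift):

[code]
// Hypothetical mapping from a tapped piano key to the MIDI note actually played.
// kbBaseNote = MIDI note of the keyboard's leftmost key at octave shift 0.
static int tappedKeyToMidi(int keyIndex, int kbBaseNote,
                           int octaveShift, int transposeSemitones) {
    int midi = kbBaseNote + keyIndex + 12 * octaveShift + transposeSemitones;
    return Math.max(0, Math.min(127, midi));   // clamp to the MIDI range
}
[/code]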

For LOOP steps, I think I will make it so that if you hold one finger on a single loop slot and then hit a piano key, it will set both the note and the velocity for that slot. If, instead, you hit the new Velocity buttons (while touching a loop slot), they modify the existing velocity of that loop slot. I think. That seems most consistent with how I do other things.
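As a sketch of that proposed rule (all names here are hypothetical, and it assumes only one loop slot can be held at a time):

[code]
// Hypothetical sketch of the proposed LOOP-slot editing rule.
class LoopSlot { int note; int velocity; }

class LoopEditSketch {
    Integer heldSlot = null;                 // index of the slot currently under a finger, if any
    LoopSlot[] loop = new LoopSlot[16];

    LoopEditSketch() {
        for (int i = 0; i < loop.length; i++) loop[i] = new LoopSlot();
    }

    void onPianoKey(int midiNote, int velocityStep) {
        if (heldSlot != null) {
            // finger held on a slot + piano key: set BOTH note and velocity for that slot
            loop[heldSlot].note = midiNote;
            loop[heldSlot].velocity = velocityStep;
        } else {
            // normal performance path would play the note here (stubbed out in this sketch)
        }
    }

    void onVelocityButton(int delta) {
        if (heldSlot != null) {
            // Velocity buttons while touching a slot: nudge the slot's existing velocity
            LoopSlot s = loop[heldSlot];
            s.velocity = Math.max(0, Math.min(15, s.velocity + delta));
        }
    }
}
[/code]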

samsyn
Administrator

So, here's a progress video showing the Keyboard Velocity changes.

https://youtu.be/-XGkN_9JLbs

This is loosely edited from four dev sessions, documenting bugs and such. It covers the new keyboard velocity support, colored notes in the sequencer LOOPs, the new octave selector, changes to the chording arpeggiator, and the introduction of a sort of 'ham radio' test mode for the alien voice radio chat.

[ 11-06-2018, 06:41 PM: Message edited by: samsyn ]

samsyn
Administrator

I had to have minor 'surgery' today (to scrape off some bad skin (basal cell -- I'm "clear")) which distracted me for much of the week.

But this evening I'm reimplementing the Arpeggiation Pattern Selection UI. Before, I had about 16 canned patterns and you would have to tap up to 16 times to select the one you wanted (more if you went past it).

Now I break that up into 7 banks of 7 patterns each (49 patterns total). To select a pattern you now must tap three times:

* tap the PATT button to enter pattern selection mode
* the I-VII keys turn into the bank 1-7 keys
* tap the bank you want.
* the I-VII keys turn into the bank N patterns
* tap the pattern you want
* display reverts to performance mode (I-VII keys appear again)

Abort any time by tapping the PATT button again. To help remember things, I use a 2 digit id number, like "77" to mean bank 7, 7th pattern.
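So the two-digit ID is really just (bank, slot); something like this sketch (my own helper names, assuming the 49 patterns live in one flat array):

[code]
// Hypothetical helpers for the 7x7 arpeggiation pattern banks.
static int flatIndex(int bank, int slot) {        // bank, slot are 1..7
    return (bank - 1) * 7 + (slot - 1);           // 0..48 into a flat pattern array
}

static String displayId(int flatIndex) {          // 0..48 -> "11".."77"
    int bank = flatIndex / 7 + 1;
    int slot = flatIndex % 7 + 1;
    return "" + bank + slot;                      // e.g. 48 -> "77" (bank 7, 7th pattern)
}
[/code]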

If you LONG press the PATT button, it opens the pattern editor for the currently selected pattern. In this mode, the I-VI keys turn into a picture of the pattern as it is now, and the VII key becomes a 'delete' button (that deletes the last note in the pattern)

Play keys on the piano to add notes to the end of the pattern.

It's a little weird because of Mode handling, but I think it will be OK.

[ 11-10-2018, 06:23 AM: Message edited by: samsyn ]

samsyn
Administrator

Today I did the "Key Signature" selector as a button on the Chord Arpeggiator. It just displays the current key and mode, e.g. "C Major".

If you tap it, then the bottom row (the I-VII buttons) turns into seven key selection buttons.

Your current key is always displayed in the center (of seven buttons, with three buttons on either side). The side buttons then show the six 'neighbors' of the key, per the "circle of fifths"

So you can't instantly jump to ANY key, but you can jump to any of the six neighbors in one tap, and from there to your real destination in a second tap, worst case.
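Computing those six neighbors is cheap; a sketch under my own assumptions (keys tracked as pitch classes 0-11, one step around the circle of fifths = +7 semitones mod 12):

[code]
// Hypothetical circle-of-fifths neighbor computation.
static final String[] NAMES = {"C","Db","D","Eb","E","F","F#","G","Ab","A","Bb","B"};

// Returns the 7 keys shown on the selector: current key in the center,
// three circle-of-fifths neighbors on each side.
static int[] keySelectorRow(int tonicPitchClass) {
    int[] row = new int[7];
    for (int i = -3; i <= 3; i++) {
        // each step clockwise around the circle of fifths is +7 semitones (mod 12)
        row[i + 3] = Math.floorMod(tonicPitchClass + 7 * i, 12);
    }
    return row;
}
// e.g. keySelectorRow(0) for C -> Eb, Bb, F, C, G, D, A (indexing into NAMES)
[/code]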

So, the I-VII buttons play the seven chords of the selected key, and the key selection buttons select the key. So you can sort of meta-play at that level, with all the underlying arpeggiation stuff changing in real time to key changes (which remap the I-VII buttons, leading to a change of the current chord and new notes for the arpeggiation).

I also re-did.. oh, documented that already.

Did some other stuff... I think I'll make a movie

samsyn
Administrator

Here's another 1.07 progress video

https://youtu.be/Q3fGrIi_C9M

It's reasonably tightly edited, but includes some long passages of 'music' (I tried to craft some music using the arpeggiator, sequencer, transpose, and reverb. And I think it successfully sounds like music in places.)

Basically, the left hand drives a key/chord progression with the arpeggiator, while the right hand just slowly adds 2 note intervals that relate to that chord.

I have my feelings about interval changes in the arpeggiator (which chord (I-VII) to play next is chosen by the same intervals).

Since then, I resumed the 'percussion detection' stage of the music vocoder. This comes after harmonic suppression. It looks for unusually loud and broad energy in the low and high sections of the spectrum. If it finds any, the peaks in those regions are analysed to see whether they look 'drummish' or 'hi-hat-ish', and it then tries to resolve them to a specific percussion instrument (kick drum, toms, snare, and hi-hat(s)). It doesn't try to do anything fancier than that. If it detects one, then in theory the playback will include the sound (played by the 'sampled sound effect' engine).
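Roughly, that band test could look like this (a very rough sketch; the thresholds and band edges are my own guesses, and it assumes the residual spectrum arrives as energy per MIDI note):

[code]
// Hypothetical "is this frame percussive?" check: broad, loud energy in the
// low or high end of the residual spectrum.
enum Hit { NONE, DRUMMISH, HIHATISH }

class PercussionSketch {
    // spectrum[i] = residual energy at MIDI note i, after harmonic suppression
    static Hit classifyFrame(double[] spectrum, double noiseFloor) {
        int lowCount = 0, highCount = 0;
        for (int note = 0; note < spectrum.length; note++) {
            boolean loud = spectrum[note] > 4.0 * noiseFloor;   // "unusually loud"
            if (loud && note < 48)   lowCount++;                // below ~130 Hz-ish
            if (loud && note >= 100) highCount++;               // above ~2.6 kHz-ish
        }
        // "broad" = many bins lit at once in that region, not one lonely peak
        if (lowCount  >= 6) return Hit.DRUMMISH;    // kick/tom/snare territory
        if (highCount >= 8) return Hit.HIHATISH;    // hi-hat/cymbal territory
        return Hit.NONE;
    }
}
[/code]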

samsyn
Administrator

I haven't blabbered for a while. The day job keeps me busy in my sloth-like energy pattern. Ostensibly, I am winding up 1.07 for release (this would be a good time for requests).

But in fact, while being a good boy and fixing every bug I notice, I am also still working on the vocoder, both for the alien voice and as a microphone-based controller input.

And a lot of my work centers on harmonics (and musical intervals in general). When recording music, I assume I am hearing multiple instruments, each with its own (time-varying) harmonic content.

When recording voice, I assume there is only one speaker, with one fundamental, and some number of harmonics of that fundamental. (per frame)

In both cases, I start by sampling 1/25th of a second of sound (40ms at 16,000 samples per second) and then break that down into a spectrum showing how much energy is present in each of the 128 MIDI note frequencies.

Actually, in voice, I spread the spectrum into slots half that big (quartersteps instead of halfsteps), so I cover only half as many octaves, but with double the precision, and I still retain 'pitch perfect' note centers (so you can sing while in voice mode, though the range is limited).
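For anyone wondering what a spectrum 'in MIDI note slots' looks like in code, here is a minimal sketch (not the actual sS:DR code) of the halfstep/music-mode version, measuring the energy at each MIDI note center with a Goertzel-style filter over the 640-sample frame:

[code]
// Minimal sketch: energy of one 40ms frame (640 samples @ 16kHz),
// measured at each MIDI note's center frequency.
static double[] midiSpectrum(short[] frame, int sampleRate) {
    double[] energy = new double[128];
    for (int note = 0; note < 128; note++) {
        double freq = 440.0 * Math.pow(2.0, (note - 69) / 12.0);  // MIDI note -> Hz
        if (freq >= sampleRate / 2.0) break;                      // above Nyquist, stop
        // Goertzel recurrence tuned to this note's frequency
        double w = 2.0 * Math.PI * freq / sampleRate;
        double coeff = 2.0 * Math.cos(w);
        double s1 = 0, s2 = 0;
        for (short sample : frame) {
            double s0 = sample + coeff * s1 - s2;
            s2 = s1;
            s1 = s0;
        }
        energy[note] = s1 * s1 + s2 * s2 - coeff * s1 * s2;       // squared magnitude
    }
    return energy;
}
[/code]

The quarterstep voice-mode variant would just use slots spaced half a semitone apart over a narrower range.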

As recently described, I made some advances in voice where I use the harmonics to help find each other, so I get the 'best solution' for a single spectrum: it finds the real fundamental and fully characterizes the first 8 harmonics.

This provides that 'not easily understood, muddy' speech, which is what I am shooting for (aliens talking to you over ham radio, in their alien languages).

So, then I moved some of that experience back into the music mode harmonic suppressor. In Music mode, I detect the harmonics for mainly two reasons: 1) to get rid of them, to boil the recording down to the actual NOTEs, not the instruments, and 2) to assign NOTEs to individual band members of the groove. (piano vs singer vs bass, for example)

To do this work, I had to improve my visualizations, and I added a strict 'opinion'-based algorithm, where several simple passes over the spectral energy allow me to form my opinion of each energy peak (at each possible MIDI note location in the spectrum).

So, basically, in the music mode spectral display for a single spectrum (1/25th of a second, which you can single-step back and forth through a multi-second recording), I now show not just the energy 'bar' but also, directly below it, the opinion 'square'.

The color of the square tells you its opinion of that energy peak: it's numeric, but coded using the ROYGBIV color scale.

1 - red - this is the fundamental (the note!)
2 - orange - this is the 2nd harmonic (octave)
3 - yellow - 3rd harmonic
4 - green - 4th harmonic
5 - blue - 5th harmonic
6 - indigo - 6th harmonic
7 - violet - 7th (or higher) harmonic

They are pretty microscopic, though. But you can single-step through the recording, look at the peaks, and see if you agree or not, which I do with some test data (often by using the synth to loop back into the vocoder, so I can use the synth to provide very controlled harmonic relationships).
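The color lookup itself is trivial; something like this (a sketch, with ARGB values I picked arbitrarily):

[code]
// Hypothetical opinion-code -> ROYGBIV color lookup for the little opinion squares.
// Index 1 = fundamental (red), 2..7 = 2nd..7th-or-higher harmonic.
static final int[] OPINION_COLORS = {
    0x00000000,  // 0: no opinion (transparent)
    0xFFFF0000,  // 1: red    - fundamental (the note!)
    0xFFFF8000,  // 2: orange - 2nd harmonic (octave)
    0xFFFFFF00,  // 3: yellow - 3rd harmonic
    0xFF00C000,  // 4: green  - 4th harmonic
    0xFF0000FF,  // 5: blue   - 5th harmonic
    0xFF4B0082,  // 6: indigo - 6th harmonic
    0xFF8000FF,  // 7: violet - 7th (or higher) harmonic
};
[/code]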

Anyway, these 'opinions' have gotten really good in voice mode, and they are getting better in music, but the music case is so much harder (multiple voices, sometimes playing the exact same notes, or my primary note matches the third harmonic of the note that you are playing (pretty common, that's a nice interval)).

Also, it's relatively easy to detect notes that are rich in harmonics. It's harder to get the pure sinewaves, since it's really hard to know which ones are just harmonics of something else. But since I now form an opinion of every peak, I basically rule out the ones I CAN have an opinion on, then apply a bunch of rules to the peaks remaining.

And those rules tee into the percussion detection module (since basically, percussive events can look like a large number of simultaneous note events)

ANYWAY, last night I cleaned some stuff up and went back to one rule "must have at least 4 solid harmonics" and that is pretty cool. It's like the flute 'disappears' but the gravelly singer voice is heard perfectly.
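That rule is easy to state in code; a sketch under my own assumptions (harmonics looked up at the nearest MIDI note to each integer multiple of the candidate fundamental, 'solid' meaning above some threshold):

[code]
// Hypothetical check for the "must have at least 4 solid harmonics" rule.
// energy[] is indexed by MIDI note; harmonic k of a fundamental sits roughly
// 12*log2(k) semitones above it.
static boolean hasSolidHarmonics(double[] energy, int fundamentalNote,
                                 double threshold, int required) {
    int solid = 0;
    for (int k = 2; k <= 8; k++) {
        int harmonicNote = fundamentalNote
                + (int) Math.round(12.0 * Math.log(k) / Math.log(2));
        if (harmonicNote >= energy.length) break;
        if (energy[harmonicNote] > threshold) solid++;
    }
    return solid >= required;   // e.g. required = 4
}
[/code]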

But the rules for "ok, I will accept fewer harmonics" still tend to let in unwanted sounds now and then. And I literally have to handle pure sinewaves (thank you, flutes and recorders). I will probably just demand that those sinewaves are 'clear, loud, and above the background noise', and then use a sliding scale between those extremes.

The percussion detector itself is educational and fun to watch, but not very musically helpful yet. I still have a pretty poor 'note start' accuracy, and that's pretty much a deal-breaker for music.

In theory, I have the data to determine the true note start msec, but it is delayed by my 'debouncing' of the signal. So by the time I can tell the sequencer about it, the time has already passed. So I have to say "I think I am about 180ms late on this one, please adjust the event time as needed so that the recording will play back correctly."

Which doesn't sound so hard. It's a little harder because of the black box of the mobile device buffering sound in and out, so unless I beep and listen for my own beep and work out the ping round trip time (oh, not the same as knowing the individual in and out delays... hmm..)...

ANYWAY, in theory I could determine that delay as well, and then end up with a good event time in the sequencer.
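The correction itself would be the easy part; a sketch under my assumptions (a fixed detector delay plus some estimated device buffering delay, which, as noted above, a simple round-trip ping can only approximate):

[code]
// Hypothetical back-dating of a detected note-on event.
// detectedAtMs      : time the detector finally decided "that was a note"
// detectorDelayMs   : known lag from the debouncing/analysis stages (e.g. ~180ms)
// audioInputDelayMs : estimated device input buffering (round-trip/2 is only a rough guess)
static long correctedEventTimeMs(long detectedAtMs,
                                 long detectorDelayMs,
                                 long audioInputDelayMs) {
    return detectedAtMs - detectorDelayMs - audioInputDelayMs;
}
[/code]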

But, to this point, I do not. There seems to be about a 60% chance that the start of any given note will be delayed by 1/16th note. Note release timing seems better. Though you still want to decode that if the instrument has a long decay and you want to find the actual 'finger lift' moment.

So, that would be an example of something I would like to be working in 1.07 release.

And, of course, the real reason for the delay is I had to switch to Android Studio. And so far, while I have made a zillion debug builds and deployed to all my devices, I have never made a release build with AS, nor have I tried updating a product APK with it. What if it doesn't work? What if I lost my old credentials? What what what?

I should make a list of all the things I am fearful about.

but it's easier to just noodle with sS:DR music.

samsyn
Administrator

OK, I bit the bullet and released version 1.07, and I bet the time before I need 1.08 will somehow be instantaneous, even though there are really no users :-)

So I will close this and open the 1.08 notes, and try to compose a cogent list of everything changed in the last 18 months.

* Improved TRAX editor (edits notes of groove)
* Improved synthesizer sampling
* Pretty massive changes to synth (adds FM synth)
* Improved FILTER module in synth (two LFOs now)
* Expanded synth to have detail page for each module
* re-did synth NOISE module
* vocoder samples can now be used as synth oscillator waveforms
* oscillator 9 can use 'live vocoder' as waveform
* added option to feed synth output back into vocoder
* added the rhythm rainbow to vocoder (shows notes, chords, keysig, and bpm estimates for current vocoding)
* added "Chording Arpeggiator" with programmable arpeggiation patterns
* added key signature support for cool music stuff
* enhanced sequencer LOOP editing.
* added a first cut 'percussion detector' for the vocoder in music mode -- not very good yet.
* Oscillators may now be individually de-tuned, can be assigned several different waveforms (including vocoder samples) and have a distortion setting.
* reworked vocoder to be very focused on Harmonics (my new friends)
* switched to a ROYGBIV color scheme to convey 'seven values' (like the seven roman numeral chords) and individual notes in the keysig (CDEFGAB)
* added a +/- 9dB preAmp to vocoder
* Added a 'Mouth' simulator for playing back formant-driven speech (but intended to carry the emotions without the words -- Charley Brown Adult Voice). A baby step towards 'in-game acoustic emotion sharing without danger of profanity'.
* You can now import starmaps either from your Google Drive, or directly from a file you drag to your device over USB.
* adds support for 16 levels of 'note velocity' (loudness), plumbed through the synth, sequencer, and vocoder.
* for Video Documentation of features, please search for "Samsyn2" on YouTube.
