Synthetic Reality Forums
Topic: synSpace: Drone Runners 1.0.09 Release Notes
samsyn (Administrator) posted:
These are the release notes for the 1.09 release of synSpace: Drone Runners, a space game for Android that supports player-created starMaps, shipShells, pilotFaces, synthPatches, sequencerGrooves, and alien Ham Radio via the vocoder.

Available in the Google Play Store for phones and tablets, and on Amazon for Kindle Fire HD devices

Summary:

synSpace: Drone Runners
v1.09 Release Notes

GENERAL NOTES:

This turns out to be a bugfix release and not the full 1.09 release I anticipated, so you will find many incomplete implementations if you wander into 3D OnPlanet mode. The frame rate on planet will also likely be too low to be playable on anything but the fastest devices: I can get about 15 fps in 3D with a current Kindle Fire HD 10", but only about 5 fps with a one-year-old Samsung Galaxy Tab A. In general, the Samsung devices are very slow for some reason. For example, "Math.random()" on the Samsung takes roughly 100 times longer than on a Kindle (a bit of handwaving there, but it's bad); I think it's probably due to thread locking in their implementation of the random() function. That is something to be addressed in the next release, so for now I am just warning you that 3D mode is not really ready for prime time yet. It also requires a matching starmap; without one, once on planet you might not be able to find a way off of it (if you long-press STOP a few times, it will eventually take you back to the thumb-selecting screen).

Full development notes are here: http://synthetic-reality.com/cgi-bin/ultimatebb.cgi?ubb=get_topic;f=20;t=000017;p=1

TRACTOR BEAMS AND TURRETS

Shell ships now have optional turrets (platforms that can be aimed separately from the ship by wiggling the 'trigger' control left and right). This is incomplete and you should not encounter it yet, but let me know if you do (and it's broken!).

MUSIC SYSTEM CHANGES

* Vocoder has new rhythm detection algorithm
* You can now (with a host adapter) connect a USB Music Keyboard to your device and play with 'real' keys (but there is still 'note lag')
* TRACK EDITOR can now be dragged further to the left than the first note
* new OPTION lets you start a recording, that is then paused until you hit the first note
* new FM Percussion support (the FM synthesis has been improved and can now handle percussion)
* TRACK EDITOR now has an 'overlay' showing the vocoder note energy, 'on top of' the piano roll notes, so you can see how musical energy was decoded. Mainly for me, but it's sort of cool to look at.
* the 15 LOOPs now have a new look and can be linked (this one finishes, and starts this other one). Full description video: https://youtu.be/5k-MnaajWSQ
* Added support for stereo output from the music system (you can't control the position yet; I just place notes based on their pitch, low to the left and high to the right; a rough sketch of that mapping follows this list)
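Roughly, the pitch-based placement is just a linear map from note number to a left/right gain pair. This is only a sketch with made-up names, not the actual synth code:

code:
// Sketch only: low MIDI-style notes lean left, high notes lean right.
static float[] panForNote(int note) {
    float pan = (note / 127.0f) * 2.0f - 1.0f;    // -1 = hard left, +1 = hard right
    float left  = (1.0f - pan) * 0.5f;            // simple linear panning law
    float right = (1.0f + pan) * 0.5f;
    return new float[] { left, right };
}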

PERMISSION CHANGES

This is actually the reason I have to push 1.09 at this time: Android changed its permission system, so devices with newer Android (which I think is pretty much 'all' at this point) have more stringent requirements for getting permission to use the internet, file system, and microphone (the things of interest to synSpace), and 1.09 adapts to the new standards. 1.09 also raised the 'minimum SDK' setting of the app, which means in theory it no longer works on very old versions of Android, but I think those devices were probably too slow to handle it anyway.

VQ BRAINS

I read a book by Ray Kurzweil that described Vector Quantization (which was news to me), and I love it, so I did my own little implementation, which I am using where a real program might use a neural network. These are specialized 'brains' that each answer only a single question, like "should I run away from this critter, or attack it?" (fight or flight). The decision is made by comparing current sense data (what the critter is 'smelling' at the moment) to the 'engrams' inside a VQ 'brain'. Each engram is basically a memory of an event the critter experienced in game (so it can be described as 'learning'): it remembers the 'smell', the 'action taken', and the 'result'. Depending on whether the result is something the critter liked or not, each engram becomes a 'vote' for one choice (fight) or the other (flight), and the final tally determines what the critter does.
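Just to illustrate the voting idea (a toy sketch with invented names, not the real CritterBrain code), the tally looks something like this:

code:
// Toy fight-or-flight vote over stored engrams (illustrative only).
static class Engram {
    float[] smell;           // sense data at the moment the memory was formed
    boolean actionWasFight;  // what the critter did
    boolean outcomeWasGood;  // whether the critter liked the result
}

static boolean shouldFight(float[] currentSmell, java.util.List<Engram> engrams) {
    float fightVotes = 0, flightVotes = 0;
    for (Engram e : engrams) {
        float similarity = 1.0f / (1.0f + distance(currentSmell, e.smell)); // closer memories count more
        boolean votesFight = (e.actionWasFight == e.outcomeWasGood);        // repeat what worked, avoid what hurt
        if (votesFight) fightVotes += similarity; else flightVotes += similarity;
    }
    return fightVotes >= flightVotes;
}

static float distance(float[] a, float[] b) {
    float sum = 0;
    for (int i = 0; i < a.length; i++) { float d = a[i] - b[i]; sum += d * d; }
    return (float) Math.sqrt(sum);
}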

I also use a VQ brain in the vocoder to try to track individual voices ("these notes all came from the horn, while these others came from the piano"). In a vocoding, all the notes of all the instruments arrive in a single channel without annotation, and I sort of want to make up a little 'band' of players so I can re-assign their instruments individually later, for playback. Plus it looks cool.

I have another VQ brain dedicated to detecting 'snare, kick, and hiHat' (but it is not very good yet)

CRITTERS

I've always wanted to make a 'stick figure skeletal pose animator' and now I have. It's hidden inside the "Super Funpak Power Show" option inside the settings panel. You have 8 workslots where you can develop up to 8 'critters' (eventually I will add a CLONE button so you can save them more permanently as official assets), but for now you can define a skeleton, drag it into POSEs and then form ANIMATIONs as sequences of poses.

Critters are only used 'on planet' and will be a big part of the next generation of StarMaps

Many example videos are available, but here's one from early on: https://youtu.be/N1JhZJ57-FE

3D WORLDS

Classic synSpace takes place in 'grid space', which is a flat plane of 'space' with little stars and barriers and vector-based 'shell ships'. There are also planets, which historically were just navigation hazards (their gravity sucking your ship to its doom).

But now, on suitable starmaps, if you 'dive your shell ship into the atmosphere of the planet', you end up (if you survive) in a new 'orbital' mode, high over the surface of a 2D planet.

From there, you can descend to the surface of the planet (wearing a critter now, not a shell ship) and walk around amongst the 3D fractal terrain and the up-to-one-million synchronized flora and fauna (so everyone sees the same stuff). At this point you can basically just look at the world and not interact much with the objects.

FRACTAL TERRAIN

The 'dirt' of the planet is represented by a 256x256 'elevation map' which is filled randomly, using a 'random seed' which is shared (by the starmap) with all players, so they can all individually generate the same shape of terrain without having to send large amounts of data (same for the Flora system). For each of the 65,536 'terrain cells', I keep some metadata to run things like the climate system, remember a surface normal so things can slide downhill, etc.
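The key property is that the elevation map is a pure function of that shared seed, so every client can regenerate it locally. A minimal sketch of the idea (the real generator is fractal; this only shows the determinism):

code:
// Every player derives the same 256x256 elevation map from the seed the starmap shares.
static float[][] buildElevation(long sharedSeed) {
    java.util.Random rng = new java.util.Random(sharedSeed); // same seed => same values on every device
    float[][] height = new float[256][256];
    for (int y = 0; y < 256; y++)
        for (int x = 0; x < 256; x++)
            height[y][x] = rng.nextFloat();                  // stand-in for the fractal passes
    return height;
}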

This is a 'canvas' app and not a full-on GPU-driven 3D graphics app, so the 3D stuff is all faked with the canvas.drawVertices() function, which is pretty cool, but much slower than using the GPU. It is only my perverse attitude that makes me stick with this for now (and the low frame rates it leads to), but it feels more 'genuine' in my attempt to recreate 3D graphics from the early 90s.
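For the curious, the faked 3D boils down to projecting the world coordinates yourself and handing screen-space triangles to the canvas. Canvas.drawVertices() is the real Android call; everything else in this sketch (including the toy projection) is made up:

code:
import android.graphics.Canvas;
import android.graphics.Paint;

// Sketch only: project three world-space points and let the canvas rasterize the triangle.
void drawTriangle(Canvas canvas, float[] world /* x0,y0,z0, x1,y1,z1, x2,y2,z2 */, Paint paint) {
    float[] verts = new float[6];                  // screen-space x,y for the three corners
    for (int i = 0; i < 3; i++) {
        float[] p = project(world[i * 3], world[i * 3 + 1], world[i * 3 + 2]);
        verts[i * 2]     = p[0];
        verts[i * 2 + 1] = p[1];
    }
    // No texture coords and no per-vertex colors: the triangle is filled with the paint's color.
    canvas.drawVertices(Canvas.VertexMode.TRIANGLES, verts.length, verts, 0,
                        null, 0, null, 0, null, 0, 0, paint);
}

// Toy perspective projection, standing in for the real camera math.
float[] project(float x, float y, float z) {
    float scale = 400f / (z + 10f);                // arbitrary focal length and camera offset
    return new float[] { 540f + x * scale, 960f - y * scale };
}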

An early image: http://www.synthetic-reality.com/drone/critter11.png
A more recent image: http://www.synthetic-reality.com/drone/flora1.png
one with ocean and plants and such: http://www.synthetic-reality.com/drone/flora1.png
and a two hour video: https://youtu.be/UMoO-w2j00c

CLIMATE SIMULATION

I added a simple climate system that distributes heat from the local star into atmospheric pressure, leading to clouds and rain and snow. Something to look at in the sky, and resources to be collected.

FLIGHT SIMULATOR

I added a simple flight simulator system. The starmap provides a list of numbers that describe the 'wings and control surfaces' of a personal flying machine (the current example is the 'cessna overcoat', which a critter can 'wear' to get the flight performance of a small prop plane). This required adding some extra controls (throttle, flaps, ..) which only appear when you have one of these equipped on your critter.

But this has become my new favorite thing to do. Some development footage: https://youtu.be/a9qAmCydsus

FLORA SYSTEM

This started out as a means for placing plants and trees that everyone would agree were in the same spot. That resulted in a 'location' system where I create up to a million locations distributed evenly across the map, and for each location, using rules and probabilities from the starmap, the engine picks an exact random spot and some random details for that specific object.

Example of what that sort of looks like: http://www.synthetic-reality.com/drone/flora2.png
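A hedged sketch of that shared-seed placement (the names are invented; the real rules come from the starmap): spread the locations on an even grid, jitter each one, and roll the per-object details from the same random stream so every client agrees.

code:
// Illustrative only: deterministic flora placement from a seed shared by the starmap.
static java.util.List<float[]> placeFlora(long seed, int gridSize, float cellMeters, float treeChance) {
    java.util.Random rng = new java.util.Random(seed);
    java.util.List<float[]> plants = new java.util.ArrayList<>();
    for (int gy = 0; gy < gridSize; gy++) {
        for (int gx = 0; gx < gridSize; gx++) {
            float x = (gx + rng.nextFloat()) * cellMeters;    // even spread, jittered inside its cell
            float y = (gy + rng.nextFloat()) * cellMeters;
            int kind = rng.nextFloat() < treeChance ? 1 : 0;  // starmap probability: tree vs shrub
            plants.add(new float[] { x, y, kind });
        }
    }
    return plants;
}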

LUNAR LANDER MODE

This is something the starmap can set up, providing your critter with a 'ballistic engine' (in this case, an antigravity device that gives you upwards acceleration) so you can have a lunar lander style minigame to get from orbit down to the surface of the planet (and back up).
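The minigame is basically one axis of physics. As a sketch (made-up numbers, not the engine's actual integrator): gravity pulls down every frame, and the ballistic engine adds upward acceleration while you hold thrust and still have fuel.

code:
// Minimal lander step, illustrative only. state = { altitude, verticalSpeed, fuel }.
static void landerStep(float[] state, boolean thrusting, float dt) {
    final float GRAVITY = -3.0f;   // made-up planet gravity, m/s^2
    final float THRUST  =  5.0f;   // made-up acceleration from the antigravity device
    float accel = GRAVITY;
    if (thrusting && state[2] > 0f) {
        accel += THRUST;
        state[2] -= dt;            // burn a little fuel while thrusting
    }
    state[1] += accel * dt;        // integrate velocity
    state[0] += state[1] * dt;     // integrate altitude
    if (state[0] < 0f) { state[0] = 0f; state[1] = 0f; }   // crude 'touched down' clamp
}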

STARMAP CHANGES

I've started several StarMaps I intended to include with this release, but since this has become a bugfix release, they will be postponed. The whole idea, however, is for the storyline to be entirely up to the StarMap, which should set all the 'game rules' for any game-related activity, while the 'engine' just implements all the game mechanics on behalf of the StarMap (which can then be 'very simple' to author, in theory). A starmap is just a text file, so it doesn't need any special tools. Plus all 'art assets' (such as they are) are created in-game with built-in simple editors (which you will not confuse with Blender).

The goal is for a StarMap to have the flexibility to make games of different styles (FPS, RPG, RTS, etc)

But here are some changes anyway

* Added a PILOT'S LOG to the StarMap options (tracks various important achievements on the map)

* Added API for setting 'tweak values' (like the random seed value that creates the terrain), so a starmap can dynamically change the world via a zillion little numbers.

* Added API support for a StarMap to create 'clearings' (areas where the fractal terrain is flattened out, making it more suitable for building a base or something)

* Added API support for 'dropping new things in world' (any of the things defined by the starmap), but mainly for trees, buildings, etc. Set decorations for story-delivering scenes.

* Added API support for RUNWAYS and other primitive objects (cube, sphere, dome, cylinder, etc)

* Started development of 'growable solids' that can start as a triangle on the ground, which then grows over time into something tree-like

Early video of this: https://youtu.be/0IGmnofWnrI

INVENTORY/CRAFTING

I am working on an implementation for this, so a starmap can declare some number of 'things' (think maybe 256 max) which can be used on that starmap. I would then have an inventory screen showing how many you owned, and a crafting panel that lets you make new ones out of other things, assuming you had the recipe and the equipment.

This is incomplete in 1.09 and I hope I have hidden it from you, but it might peek at you now and then, so don't be worried!

CONCLUSION

Again, 1.09 has been in development for over a year, so there have been plenty more changes than those above, and you are invited to watch all the films and read all the posts if you want to experience the whole thing, but bottom line, the idea is to empower starmap authors to make fun adventures to share with other players.

But now that will be in v1.10, and this one will probably just have a few confusing corners until then.

[ 03-01-2021, 06:41 PM: Message edited by: samsyn ]

samsyn (Administrator) posted:
1.08 is now available on GPS and in the Amazon App store, so future work will be part of release 1.09

Search for "synSpace: Drone Runners"

Thank you for your feedback!

[ 03-19-2019, 12:00 AM: Message edited by: samsyn ]

samsyn (Administrator) posted:
I think 1.09 might finally see the introduction of tractor beams. I know I keep saying that, so I'll try to only document actual progress.

I decided I wanted to add an optional 'turret' to the ship(s). Since it will just be a small number of lines, I think it's an electromagnetic turret and not a mechanical one. I'm thinking it has sort of a wide open, short, V shape when not locked on target, which then sharpens to a tight line as the target comes in range and the turret spins (automatically) to point straight at the target.

But I started with the basic access points. In an update() routine, I do all the fancy math to work out the projection of any 3D stuff, and then in the render() routine, I turn that into various scribbles on screen.

I am declaring for now that the turret is only used with BEAM weapons. When you fire such a weapon, it lasts for some period of time (or until you run out of charge, which it depletes while active)

The full state transitions are:

idle: no target selected, no turret rendered

targeted: rendered in a wide angle that doesn't sense the target until it is close enough

focused: aimed accurately; this is when the weapon actually fires

travelling: a beam heads out from the tip of the turret to the center of the target. It travels at some speed, possibly instantaneous for some weapons

engaging: the beam has reached the target, but now needs to hook up to take effect. If this were a magic spell (WoS: Rune Runners take note), this is where it might 'fail' (not get engaged)

engaged (I need better names): the beam has reached the target and is engaged. At this point, whatever the effect of the weapon is, is happening. A LASER might just be draining energy, a 'health laser' might be recharging energy, and a 'vampire laser' might pull energy from the target and give it (with a scale factor) to you. Or it might apply a force to you (tractor), or cause your engines to shut down or lose strength. Or some combination of the above.

While 'engaged' there are rules which would allow the target to escape. In tractor mode, there is a spring simulation between the ships (not sure if I will cheat the masses, or just count the total ship pods as your mass). Unlike gravity, a spring pulls harder the more you displace/compress it, and it can become incompressible at extremes.

So a tractor beam weapon might

* disable/diminish your engines
* pull you towards me via spring force
* until you get to my 'shortest spring length' where you are effectively held in position

but if your engines still work a little, you could struggle against the spring.

I wonder if I need an 'anchor' so I can tractor someone heavier than me (otherwise, I am mostly pulled towards them, but maybe that's a cool map concept where you use tractor beams like some games use grappling hooks).
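For what it's worth, the tether math I have in mind is plain Hooke's-law stuff. A sketch (not engine code; the constants are invented):

code:
// Pull force along the beam: zero when slack, proportional to stretch otherwise,
// and treated as 'holding' once the target reaches the shortest spring length.
static float tractorForce(float distance, float restLength, float minLength, float stiffness) {
    if (distance <= minLength) return 0f;     // target is held at the shortest spring length
    float stretch = distance - restLength;
    if (stretch <= 0f) return 0f;             // slack beam, no pull yet
    return stiffness * stretch;               // Hooke's law: F = k * x, directed toward the puller
}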

ANYWAY, I have update doing the 3D math, and I have render just drawing a line between ships. wewt. Gotta start somewhere. Any additional rendering will just be '2D' stuff near the mapped 3D points.

The nature of beam weapons is that you only have one shot active at a time (I think rapid fire would just abort the shot in progress), so next I have to wire in the 'bullet' state machine. Beam weapons are not bullets.

synSpace goal: fewer items, but all items functionally different from one another. Basic differences. Then yes, also variety within a class at the control of starmap developers, but a finite number of interesting 'properties' which can be mixed and matched as needed.

Right now, I don't see it being inexpensive to stop beams from going right through barriers. I know I should, and they should be able to melt destructible barriers, given time. But I can't do that yet either.

Oh, I had to plumb the SHIP packet to include the currently selected target for each ship. So now everyone is told when you target somebody, and in theory I could have, say, a sound effect like "someone has just targeted you", "someone who has you targeted just launched a homing missile at you", and "you have 3 seconds before impact".

Anyway, so now I can toggle my target choice on and off, and see a beam instantly appear between us. whoopee. Next comes the state machine, then the effects.

samsyn (Administrator) posted:
I deviated back into the chording arpeggiator last night and fixed the chord names for the I-VII chords, so that each of the letters CDEFGAB is featured in exactly one of the chords (creative use of sharps and flats).

It doesn't make any difference, and most notes have more than one name (D# and Eb, for example, are the same note, so I could use either), so now I use the one with the 'most attractive' first letter.

I laid out the beam weapon state machine, and then worked myself into a tizzy over 'triggered vs powerup bar' and whether there would be an associated bullet object or not.

Also, it occurred to me this could also make a 'space anchor' (a spring holding your ship to a fixed point in space), so you could sort of park near a star. Gravity would pull you in, stretching the spring, and you would eventually stabilize. Maybe needed for a quest where you have to stay close to the star for some period of time.. maybe taking test measurements, or preventing a collapse. Anyway, now I want a space anchor as a sort of standard feature.

samsyn (Administrator) posted:
Oh, and I had the idea (also for WoS: Rune Runners) that the packet I send when launching a major weapon (i.e. a 'spell') should include some random bits, which are then distributed along with the packet so everyone can simulate the weapon fire, including any random elements.
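In other words, the shooter rolls the dice once, ships the bits in the packet, and every client seeds a local RNG from them so the 'random' parts of the shot come out the same everywhere. A tiny sketch (hypothetical names):

code:
// spellBits is chosen once by the shooter and carried in the weapon-launch packet.
static boolean spellFizzles(long spellBits) {
    java.util.Random rng = new java.util.Random(spellBits);  // every receiver seeds the same way
    return rng.nextFloat() < 0.05f;                          // so this roll matches on every device
}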

samsyn (Administrator) posted:
oddly satisfying to make an 8 part state machine that just draws a line between two points :-)

I think my tractor beam use case is:

* you approach target
* you fire beam and it latches on
* you fly around, dragging target
* you 'drop' your end of the beam
* target is now 'anchored' to point in space
* you come back later and 'pick up' the beam
* you drag around.

etc

That's not what EVERY beam weapon should do, but it's sort of the most complicated thing I would like to support. This way you can go full Gulliver's travels on strapping things down (no doubt some boss NPC will need this)

Now maybe picking it back up is a stretch item. But UI-wise, I'd like to at least pencil in the 'how'. Maybe the trigger is required (must be selected trigger weapon) and then the trigger is more of a toggle to turn it on and off, rather than 'fire' it. And turning it off could, for some weapons (anchor) release it from my ship and glue it to where I was at the moment I released it.

And basically, once dropped, it would turn into a powerup, as far as getting picked back up would be concerned.

Well, I dunno.... gumption fading... have to address the core issue of 'can a beam live on its own, without a ship as its data holder...' I'm open to each ship having its own private beam data for one beam. But not N. But the gulliver case wants to be able to add multiple tethers, each of which lives on its own. I guess that votes for an associated bullet object, just to hold all the instance data.

I dunno. I think I might just special case the 'personal space anchor' item, and then it can run at the same time as a singleton beam weapon.

Also, then I could 'grapple hook' my way past several stars alternating between space anchor (not fall in) and tractor (grab next handhold). Where the handholds would be asteroids or something.

but right now I have a line! It changes COLORs!

----

I also got sucked back into the vocoder, of course. On the subject of better note start timing. Generally speaking, I announce a note start once I am positive I hear its 'pitch'. That process can be slow and take several spectrums to make up its mind, at which point I guess how far in the past it really started. And while my individual guesses are not far off, collectively they don't land together on the same beat (when they should), so it adds.. I dunno. a honky tonk sound?

ANYWAY, I think I talked about this before, but what I did last night was to track amplitude pulses separately (outside the pitch stuff, no filters involved, just first derivative of average energy graph)

So, I maintain a little buffer of 'timestamps at which I recently heard a volume increase'

then up in the pitch detection logic, once it decides to emit a note, it does my original estimate of note start, and then checks that against this list of recent loudness pulses, and adjusts to the best match.
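Roughly, the adjustment is just "snap to the nearest recent loudness pulse, if one is close enough". A sketch with invented names:

code:
// Keep a short history of loudness-pulse timestamps, then nudge the pitch detector's
// estimated note start onto the nearest pulse when it is within maxSnapMs.
static long snapNoteStart(long estimatedStartMs, long[] recentPulsesMs, long maxSnapMs) {
    long best = estimatedStartMs;
    long bestError = maxSnapMs + 1;
    for (long pulse : recentPulsesMs) {
        long error = Math.abs(pulse - estimatedStartMs);
        if (error < bestError) { bestError = error; best = pulse; }
    }
    return (bestError <= maxSnapMs) ? best : estimatedStartMs;
}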

We'll see.

samsyn (Administrator) posted:
I think my little rhythm 'hack' (actually using what I hear, I mean :-) might have worked. Just got my best ever vocoding of Escape From New York (a particularly troublesome rhythm for synSpace)

As I evolve musically, I find I am less worried about detecting note 'endings'. Feeling that 'music' is mainly about the note START time, and if the note lasts longer or shorter than expected, well, that's just normal physics and doesn't ruin the music. But late or jagged STARTs do.

Ergo I noticed a high number of long notes being turned into a short note followed by a slight delay and then a long note filling up the real note time.

This turned out to be due to a note STOP optimization where I react to a 'sudden decrease in volume, but not all the way to silence' as part of my ability to hear rapid retriggered notes.

But for that, I really just need the START velocity (to start a new note of the same pitch even though the previous one is still being heard), so I removed the optimization and got rid of at least half of the extra notes.

Which means I will err on the side of long slow notes, better for chord progressions, maybe. And I'll never be great at really fast stuff anyway, right?

---

My phone died (Nexus 6P) and my replacement arrived (Pixel 3XL) and I activated it today.. not hard at all. Had to move a sim card but the rest was automatic, just like you'd expect it to be (but never had been before for me).. Instructions were minimal, but adequate. I can now get the old phone repaired (maybe) and have another '6 inch tablet'.

So, here is the obligatory "What's Wrong with the Pixel 3XL"

I want my headphone jack back. The simplicity and inexpensive nature of wired earbuds is not outweighed by wireless (one more thing to charge/lose/get eaten by the cat). That said, I used it today at the gym with the (included) USB to 3.5mm jack adapter and it worked fine, although the cord comes out 'the other end' of the phone, so you have to put it in your pocket 'upside down' (and hence the camera no longer peeks out over the top of your shirt pocket, which I always thought could lead to a cool AR experience).

* it's still updating itself, of course, so I accept oddities for a bit. The new UI is a little irritating, and presents new problems for 'how to turn off pandora', but ultimately the task switcher started working again (would not 'delete' tasks). The list of options is a mess, requiring a search widget (assuming you remember the name of a feature), and I still need to find all the assistant/notification features I need to turn off. It's to be there for ME, not me for IT.

* The touch interface is messed up. Or at least I hope it is. It can only detect 2 fingers (I've never encountered fewer than five before). Very piano unfriendly. Also, it just seems to miss touches, in general, and the UI sticks a primary element in the middle of my piano.

* And it's got the long black screen startup (at least of synSpace), which I associate with loading sound files.

I tried one brief phone call (to my answering machine) and it worked, though the TAM hung up on me like it didn't hear me, so I must not have been very loud..

I bought a little adapter ahead of time that splits the USB-C of the phone into a USB-C to a charger, and a 3.5mm to headphone (which requires actual DAC h/w in the adapter, not just wires). It only says 'to a charger', but I am hoping it will be full USB to the computer as well. Knock on wood.

But now I get to make a little film about connecting MIDI controller to synSpace!

[ 03-25-2019, 10:50 PM: Message edited by: samsyn ]

samsyn (Administrator) posted:
heh, it worked, but the USB-to-3.5mm converter logic results in the earbuds showing up as a full-on USB audio device, which means "can't tell the difference from a MIDI controller", so when you plug the earbuds in, a permission dialog appears asking if you want to allow the earbuds to connect. (I mean, synSpace causes that when it sees what it thinks might be a MIDI controller, but in this case it is incorrect.)

I had ordered Yet Another MIDI Controller from sweetwater last week and it finally arrived. Another one of those "really do not need this, but it is cute and small and comes in an easily-stored box" Plus it was half off and comes with a handful of otherwise not completely free DAW software, so I couldn't resist.

So, I plugged it into synSpace and.. didn't work! All notes are stuck. Hit any note, it plays forever. Worst ever.

Looked in the logs for the MIDI messages and sure enough, this m-audio controller DOESN'T SEND STOP MESSAGES.

Instead it sends start (well, ON) messages with the velocity set to 0. This is an optimization which saves bandwidth when the only messages that need to be sent are on/off and you support chaining, where you send additional argument pairs but do NOT include a status byte for every message (it just assumes the previous status byte is still in force). Since USB-MIDI doesn't actually DO that (all messages are exactly 4 bytes and include their status field -- sysex exempted), I thought maybe my MIDI state machine could skip it, but nope. Gotta handle everything, I guess :-)
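The fix itself is a one-liner in spirit: treat Note On with velocity 0 exactly like Note Off. A sketch (startNote/stopNote are hypothetical stand-ins for the synth hooks):

code:
// Handle both real Note Off (0x8n) and the 'Note On with velocity 0' convention (0x9n, vel 0).
static void handleMidiMessage(int status, int note, int velocity) {
    int command = status & 0xF0;
    if (command == 0x90 && velocity > 0) {
        startNote(note, velocity);   // genuine Note On
    } else if (command == 0x80 || (command == 0x90 && velocity == 0)) {
        stopNote(note);              // Note Off, however it was spelled
    }
}

static void startNote(int note, int velocity) { /* hand the note to the synth */ }
static void stopNote(int note)                { /* release the voice */ }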

Anyway, easy fix and WASN'T IT A GOOD IDEA I GOT THIS CONTROLLER I DIDN'T NEED? Remember that the next time I want something for no good reason. A lesser mortal would then start buying all sorts of expensive things just to test code against, and then return them to the store. That would be wrong. Luckily I have pretty much never returned anything in my life, so it's not hard for me to continue to not do that.

But, do I need compatibility with the Moog One? ($8K list, I believe). Or a Tesla Model Y... do I need to test compatibility with THAT?

[ 03-28-2019, 07:58 PM: Message edited by: samsyn ]

samsyn (Administrator) posted:
cool, my new favorite android canvas method:

drawBitmapMesh()

It's been there since forever. Probably super expensive computationally.

It lets you map a grid onto your bitmap, and then select the on-screen xy location for every point on your grid (and then render the bitmap, distorted as needed)

I used this to make a renderQuad() method that works with the synSpace 'grid', so I can provide 3D world coords and it will map the bitmap as needed (for example, to be an image 'in the plane of the galaxy', no matter the camera angle)

This COULD be used to have bitmap-based ship designs (or decals applied), but since it is likely expensive, I am (I think) going to start with just special effects. I.e. form a 'ribbon' rectangle (in galaxy coords) from the shooter to the shootee and then render bitmaps stretched and rotated to match the 2D 'trapezoid' shadow. And the image would be some sort of exciting particle render.. something to provide a variety of interesting 'beam' effects.
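In case it helps anyone, the renderQuad() idea reduces to a 1x1 mesh: project the quad's four galaxy-space corners to screen coordinates yourself, then let drawBitmapMesh() stretch the bitmap onto them. drawBitmapMesh() is the real Android call; the rest of this sketch is made up:

code:
import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.graphics.Paint;

// corners = 8 floats of projected screen coords, in bitmap row order:
// top-left, top-right, bottom-left, bottom-right.
void renderQuad(Canvas canvas, Bitmap bmp, float[] corners, Paint paint) {
    // meshWidth = meshHeight = 1 means a single cell, i.e. (1+1)*(1+1) = 4 grid points.
    canvas.drawBitmapMesh(bmp, 1, 1, corners, 0, null, 0, paint);
}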

ANYWAY, my description aside, it works great. I am now flying the planet mars instead of my normal shell...

samsyn (Administrator) posted:
oops, I forgot. My new phone.. USB headphones.. appears to synSpace as maybe being a MIDI controller, so sSDR pops up a dialog every launch to ask for permission...

now fixed in 1.09

[ 04-11-2019, 04:35 PM: Message edited by: samsyn ]

samsyn (Administrator) posted:
synSpace remains my proving ground for WoS:RR (WoS 3D 'lite', as it were). One of the analogues is 'terrain', which in synSpace is basically 'barriers' and 'units' (NPC = unit).

in the WoS:RR demo (available for android), the 3d terrain is fractally generated from a set of control points which I define in 'ascii art' inside the script file itself (though WoS:RR doesn't have the cool lua script engine from synSpace yet, and I will end up reinventing some wheels when I port that)

Ultimately I need the game engine to provide editing primitives for its 'terrain document', and that bit varies with the game (synSpace vs WoS)

And then have script support for controlling that document over the span of the session. And that part should be 'the same' for synSpace and WoS... just different properties, but philosophically the same. Mostly, the terrain is pre-baked and static to the map.

I have started a new starMap, based on my first DOTA map, and in this one I am looking to further simplify map creation, and provide 'jobs' for people who enjoy different parts of the experience.

First I want to increase the max number of units, and worry about performance when I need to (level-of-detail sort of culling). I think Empyrion worked with about 1000 units total (256 each for 4 players), and I'd like to be able to do an Empyrion-like experience as a starmap.

Then I was going to have the map support no more than 64 'unit types' (they could be unique to this map, or copied and pasted from other maps), but I would call them unit types 0-63 .. ok, 1-63 and 0 can mean 'not in use'

Then, with ascii art like

code:
00000000000...
00+-A--B-+000..
00|

You would basically draw a two-d top-down picture of the starmap, and each character (one of 64) would identify the sort of thing in each cell. Mainly nothing, and barriers, but also gates, home bases, healing pads, and various RTS factories you have built over the course of the game.

Units are defined as lua tables/objects and can be extended for flavor and uniqueness

I guess your 'base' would not be affected when you lost your ship, just your ship powerups.

You would be encouraged to race your new ship to the center to feast on powerups (new factories, resources), and build defensive and offensive units which you would command to either hold around a point, or hold around a point AND SHOOT AT IT.

Maybe you have to manually walk them 'through the maze'

ANYWAY, getting slowly to the point, I think there is a stage in the middle where I effectively do exactly this (ascii art with a unit mapKey), but then I thought, can't I make it a LITTLE more graphical? Mosquito cracked wide open the creation of starmaps with his editor (Thanks again, M!)

So, this map is going to offer 'map initialization through FACE asset'

* you draw a FACE
* you use N special colors (ROYGBIV and shades of gray), meaning
- vacuum,
- barrier (most common sort(s))
- powerups
- star
- gate
- base
- flag

In addition to the standard colors, you can use any of the 64 colors (available to FACE assets) and bind those to any of your unit types.

OK, so I drew a couple of maps, and once you get over "gee, only 16 cells across is really small" (and accept that the REAL lines will be MUCH THINNER, so it is ok for it to look jam-packed at this stage), you realize you can do quite an OK RTS map with only 256 cells.

But you can't help but automatically scream "please, at least 64x64 cells! please! please!"

I hear the screams

I will get pushed there eventually, as WoS needs higher res bitmaps here and there and I need a way to do that with as little bandwidth as possible. I'm thinking maybe a server/db that acts as a copy and paste between players for larger files (still small, but over a thousand characters), with some amount of latency. Maybe an optional 'account' sort of situation for greater permanence. I dunno. In general, I don't want accounts. But when the time comes that your map needs jpegs and mp3s, they should come from a more normal asset serving path.

But right now, I have a LOVELY system for sharing postage stamp (16x16, 64-color) bitmaps, so I want to see if it's fun to make maps with them.

But, I was thinking, all the maps I make tend to replicate the four corners so 75% of my tiny postage stamp is redundant. So first I came up with a name: FLOORPLAN

a FLOORPLAN is one of these little face bitmaps. I can generate actual map objects (barriers and units) from one or more of these floorplans.

I can 'rotate' them for use in the corners.

And my thought then was that the original designer (me) would provide a floorplan baked-in, but then after the fact the goal would be to let players draw different maps, as FACE icons, using the colors as defined on that particular starmap.

And then, it occurred to me that maybe one form of play would let each player provide their own FACE icon for their own corner of the map (and maybe there is also an optional center floorplan controlled by the moderator)

Anyway, in the asymmetric case, when composing your 'base face', you would probably be limited as to the number of pixels of particular colors (no more than this much barrier, or this many kill-cannons)

But I like the thought of pulling up to a battle in progress, grabbing an unused map corner and dropping your favorite base face on it. Would probably need to add something to advertise your, well, now it will be called base face, I guess.

Anyway, if the 16x16 only describes one corner (with four possible rotations), then it's AS IF you had a 32x32 bitmap for the whole map.
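The corner trick is just a quarter-turn of the 16x16 cell grid per corner. A sketch (illustrative, not the actual map loader):

code:
// Rotate the floorplan 90 degrees clockwise so one 16x16 drawing can stamp all four corners.
static int[][] rotate90(int[][] cells) {
    int n = cells.length;                      // 16 for a floorplan
    int[][] out = new int[n][n];
    for (int y = 0; y < n; y++)
        for (int x = 0; x < n; x++)
            out[x][n - 1 - y] = cells[y][x];   // (row y, col x) -> (row x, col n-1-y)
    return out;
}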

Well, I thought that sounded cool, so I started my new star map "Autonomous Jefferson". He was born a machine, but now he's a patriot.

Hmm... Patribot.
...Botriot... botriot

(so b looks like upside down p)

oh, it's my birthday so I finally get to open my shiny new Yamaha Reface DX! I shall make a long boring unboxing video!

[ 04-22-2019, 06:36 PM: Message edited by: samsyn ]

samsyn (Administrator) posted:
Recent Changes:

* added a Pilot's Log, on the Options/StarMap page.

This logs people-centric events (like other pilots entering and leaving your star system). It's empty at the start of every play session.

* TRACK Editor can now be dragged left of the first note

Now when you drag on the 'ruler', it doesn't jam up against the left edge, leaving you unsure if you really are seeing the earliest events in the track (you are, and always were, but now you can peek to be sure anyway)

* new OPTION: start new groove recording PAUSED

This worked out pretty nicely. When you turn this option on (options/nerd tab), each time you start recording (first track of a new groove), it enters record mode, but in the paused state. The first note it sees you play (on keyboard or through the vocoder) then unpauses the recording. So your first note always starts at the beginning of 'measure 1'.

subsequent track recordings start un-paused (since they are playing back a previous track at the same time). It's still probably best to start your recording with a few quarter notes leading up to the start of work (and then delete them later)

* New FM percussion

I really need to ship a basket of instruments with the game itself, and I apologize for not already having done that. Thanks to the FM synth, I now have lots of sounds I enjoy and am not too embarrassed to share. But I recently made an FM snare drum and kick drum, which I like. Mainly I like them because they run through the real synthesizer (and are not just canned samples).

Of course, the canned samples are 'better', so I won't lose them. But there are jitter issues with playing them (using a different audio playback system than the synthesizer), and they don't go through the filter, the reverb, etc.

With these fm-based percussion instruments, I get a very reliable and precise beat. And I just feel better about myself. Here is my first drum track loop:

code:
TIME:   1 + 2 + 3 + 4 + 1 + 2 + 3 + 4 + 
KICK: x x x x x x x x x x
SNARE: x x x x x x

turn those loops on and you can't help but start tapping out a happy little melody. Just jamming one simple track on top of another. (still need to add basic track merging, and probably more tracks)

But this is a toy DAW. It is just for fun. Do not base your music career on this DAW.

samsyn (Administrator) posted:
Just Added: Filter Energy Visualization in background of TRACK DETAIL editor (at max zoom).

This is one of those things you can't believe you made it this far without having. Basically it's the same data as in the scrolling vocoder 'formant' display, but stretched large and aligned with the notes in the groove.

It allows you to 'see' the otherwise invisible note cloud and it tends to be glaringly obvious where synSpace makes its mistakes.

visualizations are always enlightening :-)

For this to work, you have to have a fresh vocoding (don't delete the spectral samples) with no big silences at the front (silence messes up the alignment at the moment).

Even if you felt the need to hand edit your vocoding, this energy background can probably help you out. This is also the data one should feed into one's neural network...

anyway, it's pretty fun and I made a little video which will eventually appear somewhere.

The biggest remaining problem seems to be "indecisive harmonic removal" where over the life of a note (a second or two in this case), I vacillate in my decision as to whether it is a note or 'just a harmonic of a note', and that results in my perceiving multiple notes within that period (instead of just one, or none). Leading to the 'staccato-ization' of long notes.

[ 05-05-2019, 04:43 PM: Message edited by: samsyn ]

samsyn (Administrator) posted:
synSpace v1.07 release notes video

https://youtu.be/Z-e2gfvKmp4

[ 05-06-2019, 01:07 AM: Message edited by: samsyn ]

samsyn (Administrator) posted:
Here are some screen shots of the 'show music energy as background to highest zoom track editor view' stuff. (New to version 1.09)

Here is a composite image of the original sound and the vocoder's attempt to turn it into notes (green bars)

[image]

Here is a different section of the song (Sounds of Silence) with just the naked music energy

[image]

And here is the same thing, with notes on top

[image]

Here is another section, just the energy

[image]

And, finally, that section with notes

[image]

You can see where the singer's vibrato causes the energy to oscillate between three notes, or close enough to cause the vocoder to 'lose track' and end the note early (only to start a new note when it comes around again, if it isn't too brief of a visit). I need to detect 'peak wobble' and just stick to the center.

Note how at each section of the song, the alignment between the energy and the notes is a little different. Things conspire against me to keep them completely locked, so you have to treat the layers as 'relative' more than 'linked'. So, for example, I am probably NOT starting a note before any energy appears :-) But I MIGHT schedule a note 'further in the past' than needed, and this tool is expressly to help resolve that, so I really NEED a good sync... but, for now, I have to take the sync with a grain of salt, and by looking at the energy, I think I get what I need.

[ 05-11-2019, 10:41 PM: Message edited by: samsyn ]

samsyn (Administrator) posted:
version 1.08 release notes

https://youtu.be/bCmWDsS0P2U

Whew, almost caught up.

samsyn (Administrator) posted:
It's fun working with the new visualizer and I think it's already paid some dividends in getting better note starts. Much less rhythm distortion now. And it looks like it might be able to keep up with quarter notes at 240 BPM, though that still turns out to be too slow for 'despacito'

But moonlight sonata is encoding beautifully now. Well, to me :-)

---

I also started another mini-AI project, still not using neural networks. Instead, I am using VQ (Vector Quantization), or at least my version of that, based loosely on what I felt I read in Ray Kurzweil's book "How to Create a Mind" (great book, I should re-read it).

---

I was thinking about my core mission (which seems to be "player-created critters with AI and scripted elements that can be dropped into player-made virtual worlds to achieve fun in various ways")

What I have learned from the current trends in AI (as published in mainstream news and YouTube) is to pick small tasks (like 'a single output that only controls the steering wheel') and then use whatever AI paradigm you like to come up with the best value for that, given current and recent experiences (sensory inputs).

So, I am using VQ as a little database of the M most unique sensory input combinations that I have experienced. And then have a value assigned to each combination.

Training results in the best 'example memories' getting stored in the VQ. Then when I encounter a novel sensory input, I find the VQ entry that feels the most similar (or several) and then use (or average) its value.

So basically I am trying to have a VERY SHORT LIST of examples, from which to base 'what to do' based on 'what I've seen'
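A toy version of that query (illustrative only; the real VQ also averages several neighbors and does the training and merging described below):

code:
// Find the stored memory whose sense n-tuple is closest to the current one and return its value.
static float recommend(float[] senses, float[][] memorySenses, float[] memoryValues) {
    int best = 0;
    float bestDist = Float.MAX_VALUE;
    for (int m = 0; m < memorySenses.length; m++) {
        float d = 0;
        for (int i = 0; i < senses.length; i++) {
            float diff = senses[i] - memorySenses[m][i];
            d += diff * diff;                        // squared distance in n-space
        }
        if (d < bestDist) { bestDist = d; best = m; }
    }
    return memoryValues[best];                       // the -1..+1 action value of the closest memory
}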

And I expect the brains to be 'choppy' as opposed to 'continuous' so there should be lots of good 'lurching' rather than being invincible killing machines.

I figure each critter (or each swarm of critters) gets a brain (class CritterBrain), and each CritterBrain contains several VQs, where each VQ answers a specific question, things like:

* which is the best enemy for me to attack
* which weapon should I plan to use on that enemy?
(a weapon then provides some constraints like 'you ought to be at XYZ relative to target, with heading H, before firing)
* What direction should I turn to get to attack position
* what speed should I go, to reach attack position

The game engine knows how to play along, and notes when the position is reached, and the weapon is fired either automatically or after a script notification. Not everything happens in the CritterBrain, which basically returns 'what the critter would do, if it could do anything'; the game engine then has to apply 'what's legal' sort of thinking on top of that, and pick from several suggestions (which could use another VQ, of course, but the goal is not to go VQ crazy). Use as few VQs as needed, with as few memories as needed, and each sensory input n-tuple should be optimized for its particular VQ and use the barest amount of input.

I strongly suspect that increasing values of N (the n-tuple) demand vastly larger M (max memory entries in VQ)

ANYWAY, I want CritterBrain to be reusable, so only the host organism should know the details, and all numbers passed to CritterBrain need to be normalized (-1 to +1 maybe) and generally are 'relative' values (e.g. how far to the nearest obstacle 'on my left'), where 0 means 'in contact with' and 1 means 'so far away you are no longer aware of its existence' (and the host decides what that means in Meters).

And just to share, here's the initial blast of code and comments. It talks about another game I have been working on (sort of) for years, "Farmy"

This is completely untested and incomplete. I typed it in about an hour, with great enthusiasm and happiness (the fun part of any project). I avoided premature optimization, and it still has the lovely simple structure of a program that hasn't been asked to do anything 'real' yet (other than compile, and not crash things due to its pure existence)

I include the Farmy comments which are probably enough, but let me pre-cap with a summary of my "Farmy Vision". Farmy Is:

* an android game with custom server
* broadly speaking, a farm simulation
* focused mainly on breeding/training 'critters'
* with the usual vaporware promise of evolution
* you then form critters into 'armies' (get it? Farmy?)

You pick units you have trained, and put them in an attack formation. Actually, I should make that two separate formations. Offense and Defense. And Defense should ideally include some set props (your 'base') that can help in your defense (defense gets a little bias)

There is a game ladder, and every FARMY/FARM/TEAM? is on a rung of that ladder. You might be managing several teams on your farm (I think you just get one farm... or not. Not sure) But you definitely can see your ranking on the ladder.

You can challenge someone above you on the ladder (up to 3 rungs?) and if you win, you take their spot, and the people between your old and new position all get pushed down one spot. If you lose, you drop (n rungs, penalizing you for challenging above your level) and the people in between your old and new position, get pushed up one rung.

And maybe its a chain of ladders for newbies leading to oldbies, but my 'demo' app just uses a single ladder

The multiplayer aspect is sort of the point of my doing Farmy, in that it is to give the illusion of being a multiplayer game, but really only the attacking player is watching it in real time (probably, the defending player is not running the app at the time of the battle).

The attacker gets what it needs from the server (which is basically just a database of team data and the current ladder ordering) so as to carry out the complete battle, and then reports the result to the server.

Ideally, any other copy of the game could fetch those results from the server, along with the team info at the time of the battle, and recreate the same battle (instant replay style), so hacked clients could be detected after the fact.

Anyway, five? years ago I was zooming through development of Farmy, but got stuck on the art. I *wanted* my 3D skeletally-animated skinned and morphed critters, casting amazing particle effects upon one another... but that remains to manifest.. so I thought '2D sprites, but using OpenGLES instead of canvas'.

And I got the basic shell of the game working (fake server), but not enough to be fun or even interesting, and I hadn't quite started to do the genetics stuff, which at the time was going to be from you growing 'special food' that would then alter an existing animal, and then lamarckian genetics would pass that increased (whatever... strength, say) on to the children with some satisfying amount of mutation and discovery of new abilities.

You would occasionally breed some sort of super soldier pig which you would assign to your offensive or defensive teams, and place in formation.

Then you would have some training pens where maybe the offense and defense teams would play against each other (hey, they actually DO that in sports, don't they?)

I guess it would mainly keep 'stats' to help you decide which critters to promote (and maybe a commerce mechanism to (humanely) sell the excess critters, or at least store them somewhere out of the way). Of course you should be able to name them. And ideally, they could remember the id token for any battle they have taken part in, and you could re-experience any of those battles, as well as looking at training pen data.

But I plan to have a simple visualization for each VQ showing circles for each 'unique combo of sensory input' and it excites me to think to offer 'live brain surgery' where you drag those circles around in real time and observe the effect on the critter's behavior. :-)

Doesn't that sound cool? Probably just end up being a random hash, but it SOUNDS promising.

As to the battle style: I spend 90% of my personal game playing time with "Tower Raiders Gold" (1 or 2, but not 3), and that's because of my highly nostalgic attachment to the original Command & Conquer game from 1995, one of my favorite years.

Anyway, so maybe it should be like that. I haven't DONE 'Empyrion for Android' yet, and maybe this could scratch that obligation itch. So that means:

* a map, with obstacles and probably resources
* units with special abilities (rock/paper/scissors)
* Real Time Strategy, units move continuously and attack when they are in a suitable position

So, it could be 2D or 3D, but the 2D would try to look 3D (multi-cell filmstrips for each unit, which can then be rotated a little to conform motion to uneven terrain). I mean it would need to LOOK like 'a command and conquer unit-based game' but could be IMPLEMENTED in 2D or 3D. Camera would be restricted in the 2D case, to the single angle for which all art is rendered ahead of time.

Each unit would ideally have an 'animation' of several activities:

* moving (at various speeds and directions)
* standing (ideally with 'breathing')
* attacking (primary and secondary)
* defending (primary and secondary)
* losing or winning (falling down or jumping up)

shades of WoS.

Here's where skeletal animation comes in. I can completely see myself as being able to create skeletal 'poses' of any shape to great merriment. I can also see linking poses in order (with auto-tweening) to form stick figure animations.

But I can't see myself drawing N frames of animation for M actions, times a million critters.

I can see myself making a crude image that becomes the 'skin' of the critter (its fur markings, as it were; its pigmentation, seen when shaved or bald).

And while I want nothing more than to make a cool thumb-driven 3D modelling program, it's still pretty insane to not just use Blender instead, especially if you want actually good looking critters.

But that's really for WoS:Rune Runners, not Farmy. Farmy, I now decide, should be C&C ish. So, can I have 2D sprites AND a skeleton/pose/animation studio?

Imagine the creator of a new critter does have to provide a 'skin' image.. with, say, 'fixed uv mapping' (this part of the image is the upper arm, for example)

If I made a stick figure skeleton/pose/animation studio (whose output could feed WoS:RR as well as Farmy -- and Farmy could be the excuse to make a database server for storage of player created assets....) Oh right, that stick figure studio would not do its own rendering! So I *could* do this inside of synSpace, where the collectible architecture already exists, and which would also enjoy having a database server for long term storage.

Yes, I am liking this. Add this to my list of things to never get around to doing!

ANYWAY, so maybe Farmy would be a 2D game using OpenGLES sprites as it already is, but I need some render magic to paste the skin onto the skeleton for a given pose and camera angle.

the pose and camera angle gets me the mapped 2D positions of all the bone-joints.

Assuming this will be rendered fairly small, so high rez is not super important, maybe each bone could be rendered as a sequence of overlapping circles, whose radii vary with some meta data about the bone (derived from the skin art perhaps)

so from a DISTANCE you see a bicep muscle, when really it is just a shoulder circle, bicep circle, elbow circle, lowerArm circle, wrist circle, and hand circle.

each of those rendered at an exactly mapped 3D location and radius, possibly with some appropriate shading, and perhaps with the Z info known well enough to render in inverse order of distance.

Maybe the stick figure studio includes a 'place circle or square here' technology. Not an actual skin per se, but the basic measurements of the critter at that spot. Then if that could be automatically UV mapped to a texture square...

Well, that's not going to LOOK like C&C, but maybe I don't mean that quite so literally. Just that it looks like '3d ant units wandering over terrain that is more interesting than just flat, but not THAT interesting.

Well THIS got a lot more dawdling than I thought it would. Dawdling. perfect word.

Anyway, here is that completely un-run CritterBrain class start. It's evolving so the comments are a bit contradictory where I am indecisive.

code:
// CritterBrain

// Try not to get too excited.

// A critter might be a 3D animal/monster/NPC in Rune Runners, or a ship/npc/obstacle in Drone Runners
// we'll maybe call the owner of the brain, the 'host'

// the goal is for this class to not really know anything about the game, though I do bake in some game
// parameters I am 'sure' I will always need.

// A CritterBrain is allocated to each critter. It is used to record experiences into a memory blob (a VQ),
// and then to offer action suggestions based on memory and newest experience.

// The CritterBrain itself can contain multiple sub-brains, basically one per 'decision to be made' and
// in this way is similar to how neural networks drive cars (e.g. this network takes in all of reality and then
// puts out a single number, indicating how much to turn the steering wheel from its current position.)

// But I am not using neural networks. At least not at this time. Not at this level.

// The host decides which subbrains it needs, so CritterBrain just lets you allocate subbrains as you
// see fit. Class VQ is the subbrain class. It is my version of a VectorQuantizer, which can turn
// any experience vector into an element of a data structure (a list)

// each row of the list (inside a VQ) contains a pattern experience n-tuple, and an output value
// if you just want to ask the VQ a question, it is "what is the output value you recommend, if I told
// you that my current sense experience is this n-tuple?" (assume I don't know the right answer yet
// so there is no training from that yet), this just answers a static question, based on the brain's
// current training.

// the list is basically a list of EXAMPLEs for which the brain 'knows the answer'. If you ask
// about an example it hasn't seen, at this level it will say "not sure, but here is a sort of average
// from some nearby experiences"

// in LEARNING mode (in particular, when a new baby brain is being fed its blended parents' brains
// with a little random evolution in it), then training experiences are fed into the brain. Each
// VQ is configured with a max number of examples (or memories) that it can hold. The first N
// experiences fill these slots.

// once all slots are full, new experiences are compared against existing ones (distance in n-space)
// and if one is found that is close enough, the new experience is folded into that one (and the weight
// of that memory is increased. These memories accumulate mass, with repetition)

// when a memory is folded into a nearby one, they are both pulled towards each other, in n-space,
// commensurate with their mass (heavy one moves less). The idea being if some experience clusters
// are sort of 'real', then these will migrate towards their true centers. Otherwise, things will
// just be wandering around in brownian motion.

// if no existing memory is close enough to the new experience, then a new memory is formed, with mass = 1
// exactly at that spot in n-space. But another memory must be removed.

// when looking to remove a memory, we actually are going to just merge it with its closest neighbor,
// so we look for the two closest (and/or lowest mass?) memories we have, and pool those into one.

// For purely fun's sake, I intend to make a visualizer (rectangle on screen somewhere) for the VQ which
// uses the output value (-1 to +1, continuous) for the X axis, and then some 'shadow' of the n-tuple
// for the Y axis (just to keep them from all being on top of each other).

// then draw some sort of colored circle at each memory's intersection in the square. Mainly to
// just watch things in real time. Ideally, the critter itself would be placed in a sensory
// test environment you could manipulate and then watch the memories shift, and see which one(s)
// are being recommended as actions given the current senses.

// Then, and this is what it is all leading up to

// you can perform BRAIN SURGERY on a living critter! If you drop the critter somewhere you know
// it ought to turn left, and you see the glowing circle on the right side of the PET SCAN, you
// just touch it with your finger, and drag it to the left, until the critter starts turning the way
// you think it should!

// not an efficient way to program a brain, probably, but I am thinking of this for my game, FARMY
// where you raise animals, form them into armies, and have them fight the armies from other farms,
// competing for position on a huge game ladder, which itself is like terrain to be crossed.

// Anyway, so FARMY is a farm simulation, and will probably have some crops (to feed animals, and
// I was always thinking that was how I was going to control their DNA), but now I want to focus
// on breeding the animals with actual inheritance of brain (lamarckian!)

// But I also want some post-birth training. Originally I was thinking: start with 2 parents (and
// maybe they get destroyed? hope not, but there is a flow issue) and you get a litter of up to, say, 10
// babies (no babies will die, but you will sell the ones that don't meet your needs). You might
// choose to sell the parents if you don't like the babies. Or remember that these two parents
// always make defective children together (but with other mates might do better.. maybe a log book :-)

// Once born, since the babies have random mutations as well as blended brains, we assume some will
// just be defective (much worse than their parents), so we put them through a series of tests (one
// per VQ) (you watch them as they are tested, and review the results). The tests can be interactive
// with you introducing them to other animals from the farm in various roles (play-fighting, cooperative
// fighting )

// While in these training scenarios (I guess opening the pet scan for editing should probably
// pause the creature), the output values of the memories can be adjusted, based on success/failure
// of the individual test. That could be done as much as you liked, and you would probably leave
// your animals inside these training pens for hours.

// CritterBrain handles the creation of a new brain, given two parent brains, a takesAfter ratio
// and a degreeOfMutation value. The resultant brain is not guaranteed to be sane or smart

// because I like simple. A CritterBrain will be limited to a finite number of VQs, say 10, and
// I will have a static array of references to allocated VQs (null if not used) and the host will
// assign the index values 0-9 as best meets their needs (I am thinking cross-species breeding
// here, with some ntuple values only set in some species, or some species have different
// needs than 'turn left'


import android.graphics.Canvas;
import android.graphics.Paint;
import android.graphics.Rect;

public class CritterBrain
{
private static final String TAG = CritterBrain.class.getSimpleName();

public final static int MaxMemoryVQs = 16; // the most sub brains you could ever have

World mWorld = null;

MemoryVQ [] mVQs = new MemoryVQ[ MaxMemoryVQs ]; // pre-allocate references (but all nulls)

// constructor
CritterBrain( World w ) {
mWorld = w;
}

// add a new VQ with the given max number of memories, and each memory has the given number of facets (ntuple)
// returns a reference to the newly created MemoryVQ
// memorySize is 'n' for the experience n-tuple.
MemoryVQ addVQ(int vq, int maxMemories, int memorySize) {
mVQs[ vq ] = new MemoryVQ( maxMemories, memorySize );
return mVQs[ vq ];
}

//-----
// these pass along to a specific MemoryVQ

// include output value if you want to train this memory towards that value
int addMemory(int vq, int[] memory, int output) {
if( mVQs[ vq ] != null ) {
return mVQs[ vq ].addMemory(memory, output);
}
return -1; // failed
}

// no training occurs, just answers based on current memories
double suggestValue( int vq, int[] memory ) {
if( mVQs[ vq ] != null ) {
return mVQs[ vq ].suggestValue( memory );
}
return -1; // failed
}

// return value relates to the editing, I think
// render the current VQ layout, for fun!

int renderVQ(int vq, boolean editOK, Canvas c, Paint paint, Rect r ) {
if( mVQs[ vq ] != null ) {
return mVQs[ vq ].renderVQ(editOK, c, paint, r );
}
return -1; // failed
}

// ------

class MemoryVQ
{
int mMaxMemories = 0;
int mMemorySize = 0;
int [][] mData = null;
double [] mValues = null;

// constructor, make an empty VQ list
MemoryVQ( int maxMemories, int memorySize ) {
mMaxMemories = maxMemories;
mMemorySize = memorySize;
mData = new int[maxMemories][memorySize]; // full of zeros, they promise
mValues = new double[maxMemories];
}

// omit output unless you also want to train the memory
// mainly this is expected to merge the new memory into the existing one (if any match)
// but will add a new one as needed, up to the max allowed
int addMemory( int[] memory, int output ) {
int memIndex = findClosestMemory( memory );
if( memIndex < 0 ) {
memIndex = mergeClosestMemories( memory );
}
if( memIndex >= 0) { // index 0 is a valid memory slot too
// now drive this index towards new memory
double weight = 1; // brand new memory is weight 1
mergeMemoryIntoExistingIndex( memIndex, memory, output, weight );

// consider adjusting the value
if( output != 0 ) {
mValues[ memIndex ] = output; // just override it
} else {
double oldValue = mValues[ memIndex ];
if( oldValue == 0 ) {
// never been set, randomize it? (-1 to +1)
mValues[ memIndex ] = (mWorld.RNG.nextDouble() * 2) -1;
}
}
}
// I guess I should return the memory index we folded into
return memIndex;
}

double suggestValue( int[] memory ) {
int index = findClosestMemory( memory );
if( index >= 0 ) {
return mValues[ index ];
}
return 0; // no suggestion? no change?
}

int [] mMem = null; // convenient reference

int findClosestMemory( int[] memory ) {
int i;
int bestIndex = -1;
double bestDistance = 1E29; // goal: a really big number, bigger than any real distance
for( i = 0; i< mMaxMemories; i++ ) {
mMem = mData[ i ]; // mMem[0] is first data element of memory
double distance = distanceBetweenMemories(mMem, memory);
if( distance < bestDistance ) {
bestDistance = distance;
bestIndex = i;
}
}
return bestIndex;
}

double distanceBetweenMemories( int [] mem1, int [] mem2 ) {
double distance = 0;
int i;
for( i=0; i<mMemorySize; i++ ) {
// I assume the host only gives me normalized values so
// the per-column importance is set.
int diff = mem1[i] - mem2[i];
distance += (diff * diff); // this is allowed to be huge, but always positive
}
return distance;
}

// finds two closest memories
// merges them into one
// copies new memory on top of the released one
// and returns its index
int mergeClosestMemories( int [] memory ) {
// find the closest two memories
double bestDistance = 1E29;
int besti = -1;
int bestj = -1;
int i,j;
for(i = 0; i < mMaxMemories; i++ ) {
for( j = i+1; j< mMaxMemories; j++ ) {
double distance = distanceBetweenMemories( mData[i], mData[j] );
if( distance < bestDistance ) {
bestDistance = distance; // track the closest pair found so far
besti = i;
bestj = j;
}
}
}
// if we found a pair, merge them (so long as I know both masses, I think
// I can do them in either order, but I will 'keep' i and 'lose' j
if( besti >= 0 && bestj >= 0 ) {
// merge j into i
double output = 0; // would this ever be used?
double weight = 1; // real weight to come
mergeMemoryIntoExistingIndex( besti, mData[bestj], output, weight );
return bestj; // the one we don't need anymore, should I null it out?
}
return -1; // didn't do it
}

// given the memIndex of an existing memory, and the int[] of a 'new' memory,
// merge the new into the old (moving the old as needed, based on weight)
void mergeMemoryIntoExistingIndex( int memIndex, int[] memory, double output, double weightNew )
{
mMem = mData[ memIndex ]; // destination (only thing that changes)
double weightOld = 1; // existing memIndex weight
double totalWeight = weightNew + weightOld;

int i;
for( i=0; i<mMemorySize; i++ ) {
mMem[ i ] = (int)(((mMem[i] * weightOld) + (memory[i] * weightNew )) / totalWeight);
}
// don't forget output changes? for now, keep the dest value
// and weight changes
}

// render the VQ, and if editOK, let them drag the individual
// memories to new values.
int renderVQ( boolean editOK, Canvas c, Paint paint, Rect r )
{
return -1;
}

} // end of class MemoryVQ




} // end of class CritterBrain
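
For what it's worth, the host side of this would look roughly like so (an un-run sketch, assuming you already have a World in hand; VQ index 0, the 'turn amount' meaning, and the sense values are just placeholders I picked for the example):

code:
// hypothetical host usage of CritterBrain -- slot 0 here means "how much to turn",
// and the 4-element sense tuple is whatever the host decides to feed it
CritterBrain brain = new CritterBrain( world );
brain.addVQ( 0, 32, 4 );                       // vq index, maxMemories, memorySize

// training: the host happens to know the 'right' turn value for this experience
int[] senses = { 10, 0, -3, 7 };               // already normalized by the host
brain.addMemory( 0, senses, 1 );               // non-zero output trains the memory

// later, just ask for a suggestion (no training happens here)
double turn = brain.suggestValue( 0, senses ); // somewhere in -1..+1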



And I guess I should note that in synSpace Music, I was going to test CritterBrain by feeding in the spectral content of each note, and letting the brain track the 'n most unique voices heard' and then use that to assign each note to one of n patches associated with the groove

i.e. at one level it decodes bass, tenor, and alto, say, by absolute pitch range, but also by the presence of octave, third, and fifth harmonics, so two singers in the same octave might still be differentiable if they have a different mix of harmonics (that is also highly repeatable)

maybe. Not that it is a good idea, just that I could try it out. Maybe the BAND panel could have the brain scan in question. Maybe it could double as the stereo-location editor for each 'voice'

So, I hacked up a MusicBrain, that creates a CritterBrain with one MemoryVQ that answers the question "which voice played this note?"

The results are -1 (bass), 0 (mid) or 1 (alto) and for the most part just categorize by pitch range. But I include the relative strength of a handful of harmonics in the nTuple (Pitch, Intensity, and relative Intensity of the 2nd, 3rd, 5th, and 7th harmonics)

Still thinking of what 'training' would mean here. For live recordings, I could play each instrument in turn, while touching 'the right answer' on the pet scan. Or after-the-fact brain surgery as described above.

I hooked it in with the smallest footprint I could. Basically just initializeMusicBrain(), renderMusicBrain(), and whichVoiceIsThis( midiNoteId )

I call renderMusicBrain only to display the pet scan (and to drive any brain surgery editing; it is VQ agnostic).

I call whichVoiceIsThis( midiNoteId ) each time the vocoder has decided it really did hear a note. Just before it sends that note to the sequencer, it asks this question, and then uses the result to pick one of N patches (the 'band' playing the groove in question) for the new note.

Each time it is called, it turns the spectral content of that new note into the 6-tuple needed by that VQ, and adds that tuple to the MemoryVQ, causing existing tuples to merge, either to make room or because the new tuple is similar to an existing one.
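
In pseudo-Java, that hand-off is roughly this shape (a sketch; whichVoiceIsThis really only takes the midiNoteId, and the spectrum argument, VOICE_VQ, and the harmonic indices here are invented for illustration):

code:
// sketch of the vocoder -> MusicBrain hand-off
int whichVoiceIsThis(int midiNoteId, int[] noteSpectrum) {
    // 6-tuple: pitch, overall intensity, and the relative strength of the
    // 2nd, 3rd, 5th and 7th harmonics
    int[] tuple = new int[6];
    tuple[0] = midiNoteId;
    tuple[1] = noteSpectrum[0];      // overall intensity
    tuple[2] = noteSpectrum[2];      // 2nd harmonic (relative)
    tuple[3] = noteSpectrum[3];      // 3rd harmonic
    tuple[4] = noteSpectrum[5];      // 5th harmonic
    tuple[5] = noteSpectrum[7];      // 7th harmonic

    // remember it (merging into an existing memory if one is close enough)
    musicBrain.addMemory(VOICE_VQ, tuple, 0);     // 0 = no forced training value

    // and ask which voice this most resembles: -1 bass, 0 mid, +1 alto
    double v = musicBrain.suggestValue(VOICE_VQ, tuple);
    return (v < -0.33) ? -1 : (v > 0.33) ? 1 : 0;
}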

renderMusicBrain then just renders a circle for each memory in the VQ (not very many), with, say, the output value (-1, 0, +1, or ?) inside the circle. Sticking out of the circle, like six little hairs, equally spaced around the circle, the six tuple elements are rendered (hair lengths), so in theory you COULD work out what they meant, in this case. I would use the first element (pitch) to place the X of the circle, and then sum the relative intensities to get something for the Y. Then the first few notes you add should sort of 'make sense' left/right and loudness (up/down) wise.
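
The render itself doesn't amount to much; something like this (a sketch using the MemoryVQ arrays from the class above, with arbitrary placement scales):

code:
// sketch of the pet scan: one circle per memory, six 'hairs' whose lengths
// show the tuple elements, X from pitch, Y from the summed harmonic strengths
void renderPetScan(Canvas c, Paint paint, Rect r,
                   int[][] data, double[] values, int count) {
    for (int m = 0; m < count; m++) {
        int[] tuple = data[m];
        float x = r.left + r.width() * (tuple[0] / 127f);            // pitch -> X
        int sum = tuple[2] + tuple[3] + tuple[4] + tuple[5];
        float y = r.bottom - r.height() * Math.min(1f, sum / 400f);  // loudness-ish -> Y
        c.drawCircle(x, y, 12f, paint);
        for (int h = 0; h < 6; h++) {                                // the six hairs
            double a = h * Math.PI / 3.0;
            float len = 12f + tuple[h] * 0.2f;
            c.drawLine(x, y, x + (float)(len * Math.cos(a)),
                             y + (float)(len * Math.sin(a)), paint);
        }
        c.drawText(String.format("%.1f", values[m]), x - 10f, y + 4f, paint);
    }
}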

Perhaps that will make brain surgery easier. But surgery would be noticing a ? value (or a value you disagreed with) and then manually 'correcting' it. Maybe going so far as to split the tuple into two tuples, set to the different values for the same pitch, then evolving from there as you get more of one than the other and they drift apart, pulled by repeating harmonic differences...

Well, first I have to finish the VQ implementation so circles actually start appearing :-)

I got it largely running last night (no crashes!). Notes get reported to the brain, it remembers what it saw (the N most-different events) and tries to match up new events with known ones, to deliver the final output: bass, mid, high.

For training, it's a little weak. When a new note comes in, I know its pitch, and can make a simple estimate of its 'value' (-1, 0, or +1) based on which octave it is in.

So if that new note ends up becoming a new memory slot in the VQ, that slot will be initially bound to that value.

But if a new note is close enough to an old note, it is not added as a new memory slot, but just 'folded into' the existing one (moving it slightly towards the center of the new note). In that case, the guess is not used and the slot retains its current value (But maybe I should have it drift a bit towards the guess)
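
The initial guess itself is nothing fancier than a few octave boundaries (sketch; the exact MIDI cutoffs here are illustrative, not tuned values):

code:
// crude first guess of a note's voice from its octave alone
int guessVoiceFromPitch(int midiNoteId) {
    if (midiNoteId < 48) return -1;   // below C3-ish: bass
    if (midiNoteId < 72) return 0;    // up to C5-ish: mid
    return 1;                         // above that: alto/high
}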

The new note is compared against all existing memories, based on the ntuples, basically just like getting the length of a 3D line, only for 6D.

At this point, I am not including the first element of the tuple (pitch) in that distance calculation, as it has too strong of an effect, and I want the primary source of 'similarity' to be 'has the same ratio of harmonics'

But really, the harmonics change as a single voice goes up the scale, so having several memories, at different pitches with the appropriate harmonic settings is better than trying to catch all notes of voice 1 with a single memory (unless the harmonics are super stable and constant from note to note, which I can make happen with my synth, but is generally not true.)

If I just needed to train it to recognize its own synth, that would be great :-) I could just have it play test grooves back into itself and compare the vocoding with the song source which is just sitting there. And probably there is some value in doing that just to vet my overall algorithms. But really I want to categorize elements of live sound, and differentiating between instruments seems a fun thing to do.

I'm pretty sure this won't be completely successful, and perhaps not much different from just using that initial guess as the final answer :-) But I'm hoping the initial guess can be 'stretched' via memories, that allow a slightly lower note to be included in 'high' because earlier 'high' notes had already gotten close to the boundary and pushed it down a little.

But ideally, I will learn to recognize two singers' different formant bands and be able to vocode a duet and keep the parts separate. It might be easier to do that with stereo-location, but I wanted to work on this VQ stuff anyway, so I am glad to have an excuse.

---

I guess additional VQs in the MusicBrain could be "what note should I play next" and "when should I play my next note?". That sort of thing could be trained by groove data, though it would assume you would only play it grooves you liked.

The Stephen Foster project.

Just posted a video with the results. First 'music brain'

And it's a reasonably short film for once!

https://www.youtube.com/watch?v=Hjre9sd6Aos

highlights include:

* sound energy shown as background on Track Editor
* new note start/stop detection
* first light 'music brain' to assign each note heard from the vocoder to one of N 'voices'
* new 'BAND' shortcuts for lo/mid/hi voices (default patches to use in new vocodings).
* example vocodings of guitar and piano

[ 06-03-2019, 08:38 PM: Message edited by: samsyn ]

Well, it's 101 degrees and I am back home from (minor) surgery. I think I will just sit in front of a fan today.

As mentioned in another topic, I have started work on my reusable skeletal pose/animation class. My 'critter creator'

Officially, it's for WoS: Rune Runners, but I see how to use it in synspace (stick figures). I don't guarantee to FINISH it, but we'll see how I feel after it makes a pose or two. Might not be as thrilling as I'd hoped.

Right now it is posing as a generic asset editor (you would open this page and then, from it, switch to editors for starmaps, critters, shells, and faces; music assets will just live inside the piano for now). This will be the intro to the actual starmap editor as well (the ability to live-edit the starmap in memory, and then clone it to save it).

so now I am secretly hoping to do a version of Farmy as a Starmap. Presumably starts with you landing on the planet of your victim and then attacking their base (and army) with your army.

Each army would be a fixed number of units (ideally, each player would pick their units and place them in a starting formation)

The attacker would then see their 'base' as being the map they were now walking on (instead of flying a spaceship). Players would select a critter to represent them (creating new critters at will, just like other assets) and the critter would just skeletally animate in response.

This is my Nth skeleton system and I want to get the balance of power and complexity right this time. I mainly want a casual user to enjoy making and posing skeletons to achieve interesting animations... probably mostly of mild animated violence (performed by one stick figure upon another).

The units must have a rock/paper/scissors sort of balance, and mostly operate autonomously (any random numbers are synchronized to a seed value so any battle can be replayed later), but mid fight their owner can set their objective (go here, hit that, run away and heal..) within reason. I guess all such commands should be logged. unit X got command C with target T.. a few bytes of data. I'm already seriously considering a 10KB payload in a challenge packet. (full description of challenger's 'farm')

But not in the Starmap version. In the starmap version I just want

* each player is a critter (stick figure + shell)
* now walking instead of gliding
(someday running, swimming, flying..)
* start with some number of extra units you can drop like mines. Once dropped, they become critter units that do their own thing, based on their nature, and the objective you have set for them.

I guess the starmap version is just hardcore RTS to start with. Until I add an SQL module to the mixter server. Which I need to do anyway, and today I think I came up with a nice enough solution which is still inexpensive for me (another $12/month or so).

So N players make bases by 'dropping mines' where you have 8 different kinds of mines, and maybe some speed at which new ones are manufactured (and put on your button bar), influenced by some sort of resource collection. And a cheap unit is a resource collector.

Maybe meteors are resources... and maybe I finish tractor beams (I can dream).

Once dropped, a unit heads towards its best guess of a destination. It has a sense radius, and if it sees a likely target in range while it has a destructive objective, it will pursue it.

If someone else hurts the unit, or the friend of the unit, it raises some aggro and redirects (possibly only temporarily) (and based upon this critter's VQ brain) to other actions (fighting back, fleeing) before resuming the original mission.

The players then (starmap version) watch this, and drop new units as they become available, over the course of the match. You fly to where you want the unit to be, then drop it. So you fly as normal, and only the units "walk" (and who is to say it has to be walking, it is just a stick figure animation, maybe it is charging its field coil...)

The following was typed earlier but got moved to this chaotic insert point:


The challenger initiates battle (and has a normal real-time strategy experience). He places his units on the enemy map (some restrictions as to where, and maybe some barriers available), and then launches his attack.

At the start of the attack, the defending units appear and rush to their formations and start defending, while the attackers fan out from their drop ships or whatever the metaphor is.

The units only have terrain following in the sense that they have VQ brains which try to tell them to turn left or right, based on very little training, inherited from their parents (and probably just canned engrams for the starmap version)

In the case of 'real' farmy, a snapshot of the challenger's complete farm and all critters and their brains, is recorded and stored on the ladder server, along with the rung being attacked. Random seed values are assigned at the time of the attack (and don't have to even be random. People could choose their own lucky seeds)

When a battle ensues, it is against the recorded versions of the farms (which might have evolved since their most recent challenge, but other players will never know unless they challenge again). Until then, they could be challenged repeatedly at that same level of development (even though, offline, they are breeding super-soldier piglets, training them to get around barriers, and giving them stick critter bodies with custom attack animations). But not in the starmap version.

Technically there should be N bases, but, you know, maybe I will focus on coop play, where there is a bot team (run by the moderator) which is defending the base you all see as the starmap. Any human players are presumably fighting the bots, though I guess they could fight on the side of the bots easily enough.

You know, that sounds about right for Drone Runners. You have entered a star group where an evil clone factory is launching an army of units to protect its evil base (which wasn't bothering YOU until YOU invaded... but... anyway...) and you want to destroy the heart of the bot base, while it is defended by lots of units and towers..

And those units are all 'stikcritters' with posable skeletons. You watch them amble through the battle. You nudge them when they are going the wrong direction


Yes, that would be a nice thing to have delivered. Goofy silly thing to do in game, and an excuse for me to do the poser.

I have the little menu bar all up and running, and hopefully this weekend I will get to do the actual skeleton definition (add bone to existing bone, repeat). Traditionally, I get bogged down here worrying forever over whether I need an XYZ offset from parent (joint) to the child joint, or is that just the bone length? And no, you DO need the offset. Not for every bone maybe, but several key bones need to support more than one joint at different offsets. But isn't that the same as just having three separate child bones with their own bones to the parent? It's not like you're rendering the bones (well, in this case, I am, which might argue all the more for that).

But I will just push forth. I have drawn enough pictures in my little book. I have my dream, my vision. Now I need to turn it into an ugly reality.

Now to relearn quaternions.

I guess the takeaway from that is that a Drone Runners lua script should be able to create a power up that can be found, or given.

Once collected, it goes to the button bar, and you stack them as normal.

As long as you have this 'factory' (probably forever), it is manufacturing more units of the same type.

You then fly somewhere and drop them, and they are the only things that can attack the heart of the base.

Maybe the bots totally ignore the human players (and are invulnerable to normal ship weapons), so the actual player ships zip about without incurring much danger, and drop units that take (and give) all the pain.

The ship taking the place of 'the mouse' in a traditional RTS... sounds tedious when put that way.

Anyway, as a co-op map, any human player dropping in could help destroy the bot base. Which would probably be in the very center, with some sort of maze protecting it that your units would have to find their way through.

with an asteroid belt for resource replenishment.

Now, what if their heart is a star? So your goal is to destroy the star (using suitably expensive units and waves and waves of attacks).

Okay, just uploaded a video showing the first cut stick figure animator. It's sort of interesting and I hope it looks like fun in itself, outside of its use as an 'object development tool'.

https://youtu.be/71wE9MCZMLU

While the typical use of a critter is for 'an animal', it really can be used for anything that benefits from an animated skeleton. That includes people and animals, but also trees, and machines, including weapons (recoil animation, etc)

If I do it right, I could hand a weapon critter to a biped critter, and it would know how to use it, because the weapon would have its own animation of 'being fired' and in that animation it would have nodes that indicate "I must be held in the left hand on this spot, and the right hand must follow this other spot"

So, if you trigger the "bow and arrow" while a biped critter is holding it, the weapon itself plays an animation of the string pulling taut and being released, but it also hijacks the arms of its holder and IK-forces the holder's hands to the spots shown in its animation, so without any explicit animation in the biped at all, the hand appears to grab the bow string in the proper spot, draw back, and release.

Likewise, an attack animation should be able to include a target node (inside the animation itself) with the understanding that the animation can only be triggered if the critter has a selected target, and if that is true, the critter will first move itself until the target is a certain distance and angle away, then start the actual animation of the weapon action.

Maybe. All done with characters under a cm in height on screen, so lots of action in very few pixels.

I guess I will update when the url is known.

[ 07-10-2019, 07:06 AM: Message edited by: samsyn ]

Hesacon
Any chance of Python support for scripts?

[seems unlikely since I already have Lua working :-) I think Lua might actually be better for this particular thing. But, again, my opinion is colored because I already have a kickass multithreaded lua working in Java.

I do use Python for the new server though, and who knows if that leads to anything user-extensible. Ideally, I would love to offer some python source that people could run to host a server, but first I would need a bit more security against abuse.

-S ]

[ 07-20-2019, 03:41 PM: Message edited by: samsyn ]

I've got the stick figure animator up to the level of multi-keyframe animations. Plus I added the first-cut inner physics system.

Basically that's just some ground-contact testing (I maintain a rectangle whose limits are set by where your foot nodes are -- if they are on the ground, they contribute to the shape of the region.)

Then I track the center of mass of all the joints and project the shadow of that onto the ground contact region.

If the shadow is outside the ground contact, I note that "I am falling and something should be done"
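
As a sketch, the whole balance test is about this much code (the Joint struct and its field names are made up for the example; real joints obviously carry more than this):

code:
class Joint { float x, y, z, mass; boolean onGround; }

// build the support rectangle from the grounded foot nodes, then see whether
// the shadow of the center of mass falls inside it
boolean isFalling(Joint[] joints) {
    float minX = Float.MAX_VALUE, maxX = -Float.MAX_VALUE;
    float minY = Float.MAX_VALUE, maxY = -Float.MAX_VALUE;
    boolean anyContact = false;
    float comX = 0, comY = 0, totalMass = 0;
    for (Joint j : joints) {
        comX += j.x * j.mass;              // center of mass, projected onto the ground
        comY += j.y * j.mass;
        totalMass += j.mass;
        if (j.onGround) {                  // grounded nodes stretch the support rect
            anyContact = true;
            minX = Math.min(minX, j.x);  maxX = Math.max(maxX, j.x);
            minY = Math.min(minY, j.y);  maxY = Math.max(maxY, j.y);
        }
    }
    if (!anyContact || totalMass <= 0) return true;   // nothing under us at all
    comX /= totalMass;
    comY /= totalMass;
    return comX < minX || comX > maxX || comY < minY || comY > maxY;
}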

Next is to enable IK and then do something about it (either plan a foot motion to improve the ground contact region, or reposition the upper body to move the center of mass).

I don't plan to support full rag doll, but I might have a basic state machine that can trigger appropriate animations to fall down, or teeter for a bit first.

But I know the visualizations of the CoM and ground contact stuff have helped me appreciate when my animation sucks: keeping the whole thing 'in balance' makes a difference to how you perceive the reality of the pose. And keeping the feet in solid ground contact, moving exactly at ground speed, is also required to avoid ice skating and other distracting issues.

I think my new sense of a 'walk cycle' is that feet basically have to match the ground speed while in contact. Which means if you are running super fast, the actual time spent in ground contact gets shorter and shorter, until the foot is basically just tapping the ground (and pushing off to go ballistic into the next step, which is itself a very short contact with the ground.)

So I sort of only NEED one pose (max stride with both feet in contact -- back foot with just the toe, as it lifts; front foot with heel just hitting the ground, and about to move smoothly with the ground).

Then maybe another pose that shows the lift and thrust forward of the back leg as it races to get in front to become the forward leg.

---

But this weekend I really need to add the official animation timeline. And maybe even persistence, now that there is something worth saving.

Been thinking more about the ARENA page (where you test your critter with other critters of your own or other players).

Basic controls:

* tap on the ground to make your critter 'go there'

* tap on another critter to make it your target

* tap one of N 'action' buttons to trigger one of N basic actions (attack1, attack2, defense, ...) (actions you might need in an RTS or FPS game)

* directly trigger any animation or pose inside the critter

* enable various autonomous actions (which boil down to the critter pushing its own remote control buttons)

But autonomous standing (with teetering poses) and autonomous walking (with good ground contact at any speed) would also be good to do first.

bla blah bla, or I can just sit around and nurse all my aches and pains.

Oh, and I know my rectangular ground contact region is NOT a good match, physics-wise, to a REAL ground contact region, but it was so super simple to do this way, plus I think it should 'teeter' just fine!

And I'm all about the teetering.

Made some progress on Critter and made another video.

https://youtu.be/N1JhZJ57-FE


'advances' in this group

* pose slots work
* animation slots work
* basic ground contact detection works
* center of mass tracking works
* slerping from one pose to another smoothly works
* animation can play multiple keyframes in a row

But no timeline UI yet, just an 'add keyframe' button.

Since making the film, I have added a simple timeline UI and debugged some issues with timing (each keyframe comes with a duration, which is the time from the start of the keyframe until the moment the pose has completely turned into the keyframe pose, starting from the previous pose). I have changed the way it determines the previous pose, and now things feel much more logical.
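
Roughly, the evaluation looks like this (a sketch only; Keyframe, the quat layout, and the 'previous pose' rule here are placeholders, not the real classes):

code:
// sketch of keyframe evaluation: walk forward through keyframes by duration,
// then slerp each joint from the previous keyframe's pose to the current one
class Keyframe { long durationMs; float[][] jointQuats; }  // one (x,y,z,w) quat per joint

float[][] poseAt(Keyframe[] anim, long tMs) {
    int k = 0;
    while (k < anim.length - 1 && tMs > anim[k].durationMs) {
        tMs -= anim[k].durationMs;                 // consume earlier keyframes
        k++;
    }
    Keyframe prev = anim[Math.max(0, k - 1)];
    Keyframe next = anim[k];
    float t = next.durationMs <= 0 ? 1f
            : Math.min(1f, tMs / (float) next.durationMs);   // 0..1 within this keyframe
    float[][] out = new float[next.jointQuats.length][];
    for (int j = 0; j < out.length; j++) {
        out[j] = slerp(prev.jointQuats[j], next.jointQuats[j], t);
    }
    return out;
}

float[] slerp(float[] a, float[] b, float t) {
    float dot = a[0]*b[0] + a[1]*b[1] + a[2]*b[2] + a[3]*b[3];
    float sign = (dot < 0) ? -1f : 1f;             // take the short way around
    dot = Math.abs(dot);
    float[] out = new float[4];
    if (dot > 0.9995f) {                           // nearly identical: plain lerp is fine
        for (int i = 0; i < 4; i++) out[i] = a[i] + t * (sign * b[i] - a[i]);
    } else {
        double theta = Math.acos(dot);
        double wa = Math.sin((1 - t) * theta) / Math.sin(theta);
        double wb = Math.sin(t * theta) / Math.sin(theta);
        for (int i = 0; i < 4; i++) out[i] = (float)(wa * a[i] + sign * wb * b[i]);
    }
    float len = (float) Math.sqrt(out[0]*out[0] + out[1]*out[1] + out[2]*out[2] + out[3]*out[3]);
    for (int i = 0; i < 4; i++) out[i] /= len;     // renormalize to be safe
    return out;
}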

I need to implement a pause/resume on an animation, and then I can have a tap on the timeline auto pause it and set the play cursor to the spot you tapped and then show the exact pose at that msec.

Then some sort of single step fwd/back that maybe advances 1/30th of a second each time, as well as just a handful of playback speeds.

I'm dragging my foot on playback speeds a little, since for the auto-walk stuff to work, it has to dynamically modulate the walk oscillation. And that sort of transcends so simple a concept as 'animation speed'. But there are times when the animation is really just an animation (pick up your shoe and throw it) and for those it just needs a 'real time' setting and a couple others (slow and fast). And maybe not even fast.

But maybe the whole fast/slow thing should be keyed to the music system's bpm value.

Certainly while 'dancing'

This weekend I am mainly cleaning up some bugs that have accrued in the Critter stuff. Part of that was completing the CLIP/PIN ui for the BONES editor

You can now select a skeleton bone and then CLIP (or DELETE) it, leaving that bone and all its children on the 'bone clipboard'

the PIN command then attaches a copy of the 'severed limb' to the existing joint of your choice.

kinda creepy, kinda cool.

I still need a simple 'break bone and add another joint in the middle' but other than some missing functionality, I think all the stuff I have done so far is working at the same time at the moment

* WORKSPACES (8) are preserved (persistence)
* Example Skeletons (biped, q-ped, avian, centaur, dragon) are included for starting points and mostly have appropriate t-poses
* POSES (64) can now be copied and pasted between pose slots. The code can handle 'partial poses' (so I could copy just the pose 'from the waist up' to a new slot, without changing other bones it might already have). In general, I will be playing 'partial animations' with different systems owning different bone sets for the duration of their special activities.

For example, I am 99% sure critters will be component based and the head, hands, and feet will probably come from separate critters (and the head would then come with its own animations, for example.) Such animations would play independently for the most part.

Today, however, I want to work on the 'inner physics'. I have my 'design' in my head, so we'll see what happens. But basically, whenever the host game asks me to provide a posed skeleton, I first do my quaternion slerping and come up with the 'perfect book pose'. I.e. exactly what the artist asked for.

I then compute the delta between the perfect pose and the current pose and call that the 'pose error' (which is a little vector for each joint).

Using the joint masses and some scale factors, I create a 'pose error force' which pushes the joint towards the perfect pose, but just sets a velocity going.

In each subsequent frame, for each joint, I

* accumulate all forces (pose error, gravity, bone length error) into a single acceleration for the joint (another little vector)

* do the newton stuff

v = v + a * t
x = x + v*t + a*t*t

Which gives me the 'physics corrected' pose (for this moment in time) which factors in inertia and gravity while trying to reach the perfect pose (which it might not even reach before the next perfect pose is selected)
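
Per joint, that step is only a few lines (a sketch; Vec3, the spring-ish kPose factor, and the z-up/ground-at-zero assumptions are all mine for the example, bone-length error forces are left out, and I fold the a*t into v first so the position update is just v*t):

code:
class Vec3 { float x, y, z; }

// one physics step for one joint: pose-error force plus gravity, then newton
void integrateJoint(Vec3 pos, Vec3 vel, Vec3 perfectPos, float mass, float dt) {
    float kPose = 40f;                    // made-up scale factor for the pose-error force
    float ax = kPose * (perfectPos.x - pos.x) / mass;
    float ay = kPose * (perfectPos.y - pos.y) / mass;
    float az = kPose * (perfectPos.z - pos.z) / mass - 9.8f;   // gravity along -Z

    vel.x += ax * dt;  vel.y += ay * dt;  vel.z += az * dt;    // v = v + a*t
    pos.x += vel.x * dt;  pos.y += vel.y * dt;  pos.z += vel.z * dt;

    // ground contact correction: nothing gets to stay subterranean
    if (pos.z < 0f) { pos.z = 0f; if (vel.z < 0f) vel.z = 0f; }
}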

After the physics correction comes the ground contact correction. No joint is allowed to go subterranean (while in walking mode), so if the physics-corrected (or artist-provided) pose goes into the dirt, I just snap it back up to dirt level, even if that distorts the bones. But I will turn that into an upwards force (so if an animation insists on driving a foot deep into the ground, I *should* turn that into a forceful acceleration of the root, leaving the whole skeleton with a ballistic path upwards).

Meanwhile, the host has told us that the critter is moving N meters per second 'forward' (on the pose screen, we always are walking forward), and hopefully by weekend I will be rendering some grass scrolling by to sell that speed.

Then, if autowalk is enabled, it adds the rule that 'while in contact with the ground, a joint must move exactly at the speed the ground is moving'. So from the POSE it gets whether the foot is on the ground, and then, until the POSE says the foot has LEFT the ground, the foot's actual position is computed from the moment-of-contact position, in a straight line, at exactly the ground speed, and is repositioned exactly to there.

Once the POSE says the foot has left the ground, then it no longer includes this restriction.

And I emphasize the POSE defines foot contact, but really, since gravity will pull the root down until something DOES make ground contact, I will probably just use the current physics-corrected value's opinion of ground contact.
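
That pinning rule is about this much code (sketch; reusing the little Vec3 from the sketch above, with a made-up FootPin holder for the touch-down point and time):

code:
class FootPin { float touchX, touchY, touchTime = -1f; }

// while a foot is flagged as grounded, its position is recomputed from where it
// first touched down, moving in a straight line at exactly the ground speed
void pinGroundedFoot(Vec3 foot, boolean onGroundNow, FootPin pin,
                     float groundSpeed, float now) {
    if (onGroundNow) {
        if (pin.touchTime < 0f) {          // just touched down: remember where and when
            pin.touchTime = now;
            pin.touchX = foot.x;
            pin.touchY = foot.y;
        }
        // the ground scrolls backwards under a forward-walking critter
        foot.x = pin.touchX - groundSpeed * (now - pin.touchTime);
        foot.y = pin.touchY;
        foot.z = 0f;                       // keep it exactly on the dirt
    } else {
        pin.touchTime = -1f;               // lifted: forget the contact
    }
}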

In my mind's eye, this seems like it could be cool and make for a natural animation that only ice skates on purpose.

But we'll see what's working by tomorrow night.

Time goes by... I still haven't finished the auto-walk and animation stuff, since I want to be sure I have the whole lifecycle handled (auto-walk to trigger point, play canned animation, auto-walk back to starting point or end point).

So I thought I could best do that if I had enough of an Arena to try things out in. So I've switched to fleshing out the arena scene, and that's the first time I actually had multiple critter objects living at the same time.

Turns out I hadn't quite thought enough about where the camera should live, so I painted myself into a corner where the 'hive' knows the big picture, but the individual critter knows its world xform. And I have to use the proper bit from the proper object, and now I do, but it took me hours to see what I was doing. "what? zoom looks great here, and a moment later it's zero again! what?"

Anyway, so now my arena has N rotating critters in it, and I am about to make a control panel just for this purpose... something that would work in a variety of games, mostly third-person, or first-person where you see your own hands and such.

RTS and RPG for sure.

I'll get the basic buttons set up (row of action buttons along the bottom, plus a general steering proxy and some sort of nintendo style dpad for navigation of menus).

I'll have a little window into the soul of a given critter, showing what's on its mind, etc. and a mini map.

But I write now because of that minimap and the concept of terrain, which, of course, doesn't exist in synSpace. I don't NEED terrain in the Arena, but, of course, it would be cool if it was there (Arcadia Park, on the way!)

Which is right up there with mesh support. Something of no use in synSpace, a canvas based game. But of great interest in WoS 3D.

The other day I stumbled upon drawBitmapMesh(), which is a pretty nifty canvas function I'd not known the existence of (this is why you should read through the reference materials).

For example, it turns out there is another function

drawVertices()

which can be paired with a BitmapShader() you can bind to your Paint(). And voila, your canvas app can suddenly draw textured meshes, more or less, at some speed or another (I assume it's slow).

But if I can mirror GLES, then I can be even more helpful to my host, and definitely do the multibone weighting stuff for them.

But you could totally use this in synSpace for fully textured small ships on a canvas app, and I apologize for missing this.

the documentation (from ages ago, this has been in there since api1) implies it's all buggy, and, oddly, not supported when hardware acceleration is enabled.
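
For anyone else who missed it, the basic recipe is something like this (a sketch; with a plain BitmapShader the texture coordinates are in the bitmap's own pixel space, and as the docs warn, this path wants a software canvas):

code:
import android.graphics.Bitmap;
import android.graphics.BitmapShader;
import android.graphics.Canvas;
import android.graphics.Paint;
import android.graphics.Shader;

// one textured triangle via Canvas.drawVertices()
void drawTexturedTriangle(Canvas c, Bitmap texture) {
    Paint paint = new Paint();
    paint.setShader(new BitmapShader(texture,
            Shader.TileMode.CLAMP, Shader.TileMode.CLAMP));

    float[] verts = { 100f, 100f,   300f, 120f,   180f, 320f };  // screen x,y pairs
    float[] texs  = { 0f, 0f,   texture.getWidth(), 0f,   0f, texture.getHeight() };

    c.drawVertices(Canvas.VertexMode.TRIANGLES,
            verts.length,     // number of floats in verts (two per logical vertex)
            verts, 0,
            texs, 0,          // texture coords, same count as verts
            null, 0,          // no per-vertex colors this time
            null, 0, 0,       // no index array
            paint);
}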

Anyway, I'm going to use the Arena to experiment with it. Might have to do my own 'lighting', unless the shader stuff is more fully featured than it looks. But hey, I love doing fractal surfaces driven by bitmaps! turn a single RGB bitmap into elevation, terrain, foliage maps. Make your own island.

Tons of footage for yet another video, which I hope I am more brutal about editing. Just quick snaps to show the evolution.

Heh, it's the Critter Dance Party in the Critter Arena! (these critters are all 'dancing' though they are also floating above the terrain)


The 'terrain' is being rendered by a single call (well, two actually) to canvas.drawVertices() with an array of about 256 verts

I have to manually cull the back-facing ones, which *might* leave it fast enough to actually use. (it's faster in wireframe, but that's a very unsatisfying surface to walk on.)

Anyway, it begins to look real. I love the vertex color mixing. For a night-time planet, that's the real synSpace full game running in the 'night sky' while we chill on this planet. Feels very synSpace appropriate.

[ 08-20-2019, 08:13 AM: Message edited by: samsyn ]

So that was a fractal surface with random vertex colors and no other 'colorMap' info

Here is the same fractal surface, but including a bitmap colorMap (128x128 pixels) spanning it. Just one of the planet images from inside synSpace, and then zoomed way back (we're still in the arena)


And this is closer up. You can barely make out my red skeleton in the foreground, but you can see the fractal sheet with anti-aliased color map on it. That's probably as fancy as it will get in the Arena (this drops me to 18 fps at this point, though I hope to improve that)


My plan is for the game host to give me a 128x128 elevation bitmap, which I then use for elevation (including letting critters know where they can stick their feet)

then optionally another for colormap purposes at whatever resolution makes sense. And try to hack something that provides good contrast with the critters.

Then some sort of sandbox memory so you can decorate your part of the arena (presumably shared with others.. so maybe the arena is ... well, you see).

Anyway, canvas.drawVertices() is already much better than I expected (I still have to cull backfaces and use a back-to-front rendering path for the triangles, which I haven't done yet), but probably not good enough for full skinning of all critters and terrain.

But yeah, if a star map only had one planet/star on it, I could totally afford to render that as a simple sphere (64 faces maybe) with a texture image on it. That could then rotate nicely. Certainly an OPTION on the starmap.

Ideally, its gravity would pull you in, then switch to a lunar lander game view as you zoomed close, with actual gravity pulling towards actual center of sphere, and you could (if allowed by starmap) actually land, and see your little vector ship sitting on the terrain, and your critter walks out of it? then its a critter game and it becomes Rocket Club or WoS 3D Lite :-)

[ 08-21-2019, 07:05 PM: Message edited by: samsyn ]

I now use the fractal height map to define where the ground is (as opposed to just being eye candy), so now the critters finally know where the dirt is and can stand on it instead of randomly standing above and below it.


For the Math, I basically have a grid of points (some number of rows and columns) that is stretched over the full visible sheet of the world (a km or two maybe)

So, starting with your XYZ location, where we are going to find the elevation of the terrain that contains the 2D point XY (and that elevation will become the desired landing spot for the critter)

I cover the world with triangles by using two triangles per grid square. For esthetic reasons, I alternate the diagonal between squares.

I first map your XY to the row,column you are 'inside'

I keep track of your 'partial position' within that square.

I then consider the diagonal for that square and 'which side of it you are on'

this then gives me four cases to deal with, where the triangle is in the upper left, upper right, lower left, or lower right. of the square

Pretend you have this:

code:
A----------x---B
|\          *  y
| \            |
|  \           |
|   \          |
|    \         |
C--------------D

A B C and D are the four points of one grid square of the elevation map. We know the height of the ground at each of these points. (hA = height at A)

The asterisk is me, and I am standing inside the top right triangle (as viewed from above)

The x and y are my partial coordinates within the grid square (0 to 1 each)

Since the triangles all have straight edges, we can work out the height of any point ON A LINE with something as simple as this;

h1 = hA + x * (hB - hA)

so if x is .75, then we are 3/4 of the way from A to B,

I use x to find the line point heights on a horizontal edge, and the diagonal. I call them h1 and h2.

then the final height where I am just needs the y value, like this:

h = h1 + y * (h2 - h1)

Basically the four cases are all the same, with different combinations of ABCD and a couple inverted y values.

I guess I can share the unoptimized code (theoretically this is readable)

code:
float groundLevelAtPoint( float [] fl, int oPoint )
{
if( mArenaData == null ) {
return SeaLevel;
}

// get the coords
float x = fl[ oPoint + 0 ];
float y = fl[ oPoint + 1 ]; // positive is 'north', which runs opposite to the terrain cell row
float z = fl[ oPoint + 2 ]; // in case want to 'fall down to terrain' and have multiple terrain levels

float cellSideMeters = mapSideMeters / (terrainCols-1 );

float fCol = (x + mapSideMeters/2f) / cellSideMeters;
float fRow = ((-y * cellSideMeters * 2f) / mapSideMeters) + (terrainCols-1)/2;

int row = (int) fRow;
int col = (int) fCol;

float innerX = fCol - col; // just the fractional position within the cell
float innerY = fRow - row;

// if I am outside the map, return deep ocean
// (need room for the row below and the column to the right, used below)
if( row<0 || col<0 || row > terrainRows-2 || col > terrainCols-2 ) {
return -100f;
}

// get the height info for these grid points
int iCell = ((row * terrainCols) + col); // (row, col) is point A (top left of terrain cell)
int oCell = iCell * 4; // 4 floats per point
int right = 4; // 4 floats to the right
int down = 4 * terrainCols;
float hA = terrainVerts3D[ oCell + 2 ]; // A: top left, the Z component
float hB = terrainVerts3D[ oCell + 2 + right ]; // B: one to the right
float hC = terrainVerts3D[ oCell + 2 + down ]; // C: one below A
float hD = terrainVerts3D[ oCell + 2 + down + right ]; // D: down one, and one to the right
// basically, we linearly interpolate along the horizontal edge and along the 'fold'
// (the diagonal) to get two heights at our innerX, then use innerY to scale
// between those two heights

float h = SeaLevel; // return value
if( ((row&1)^(col&1)) == 0 ) {
// A-x-------B
// y *      / |
// |       /  |
// |      /   * y
// C------x--D
if( innerX + innerY < 1f) {
// top left triangle
float hHorizontal = hA + innerX *(hB - hA);
float diagonal = hC + innerX *(hB - hC);
h = hHorizontal + innerY * (diagonal - hHorizontal );
} else {
// bottom right triangle
innerY = 1 - innerY; // other diagonal
float hHorizontal = hC + innerX *(hD - hC);
float diagonal = hC + innerX *(hB - hC);
h = hHorizontal + innerY * (diagonal - hHorizontal );
}
} else {
// A------x--B
// | \      * y
// |  \       |
// y *  \     |
// C-x-------D
if( innerX < innerY ) {
// lower left triangle
innerY = 1f - innerY; // other diagonal
float hHorizontal = hC + innerX *(hD - hC);
float diagonal = hA + innerX *(hD - hA);
h = hHorizontal + innerY * (diagonal - hHorizontal );
} else {
// upper right triangle
float hHorizontal = hA + innerX *(hB - hA);
float diagonal = hA + innerX *(hD - hA);
h = hHorizontal + innerY * (diagonal - hHorizontal );
}
}
return h;
}

Anyway, I like this technique since the individual steps are easy to understand. You could also approach it more from the equation of a plane defined by three points, then get the distances to the three points and use some ratio rule. But this is easier for me to follow.
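
For completeness, the plane version collapses to a one-liner per triangle (sketch; this assumes a triangle with corners A=(0,0), B=(1,0), C=(0,1) in grid-local coordinates, using the same hA/hB/hC heights as above):

code:
// height anywhere inside that triangle, from the plane through its three corners:
// dz/dx is (hB - hA) and dz/dy is (hC - hA), with A as the origin
float heightOnTrianglePlane(float hA, float hB, float hC, float innerX, float innerY) {
    return hA + innerX * (hB - hA) + innerY * (hC - hA);
}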

I'm not convinced this terrain will be truly useful, but it was fun to do and isn't that what it's all about?

It occurs to me that if I wanted to test develop my multi bone weighting code in a canvas app, I could use this terrain as my 'character mesh' and the 'animation' would basically be an earthquake, which might be sort of cool. a skeleton down inside the dirt.. hmm, that reads more than one way. This skeleton stuff has me saying all sorts of creepy things.

Anyway, now that I have dirt, I am more inspired to add some acceleration so I can drive the critters around on the surface, and THEN I can do my foot-IK stuff in a more fun environment.

Right now I am slerping linearly from pose to pose, and going very slowly, so it's kind of plodding. I need to get that a bit pepped up and more bouncy. I want these guys to really look like they're scrambling around, and in contact with the dirt.

Oh, but here's something. If the critter is falling towards the ground and I tell it to go to a pose that pulls it into 'a crouch' (while it is still falling), and then uncrouch just as I hit the ground, the animation of the legs uncurling should drive them 'into the ground' a bit, and that triggers an IK to keep the feet above the ground, which might require auto-bending a knee or two, but it ALSO results in an acceleration to the root of the skeleton, which basically acts like the leg extending was a real muscle-driven 'jump', without my explicitly coding a 'jump' function. Just a natural reaction to extending your legs quickly at the right time. (and no two jumps probably will look alike..)

I think I will splurge and literally test the exact ground contact of each joint and not just use a sort of average height (so you can stand on a slope and see one foot higher than the other). And I guess my critter should automatically try to hold its head straight up, instead of off at an angle, when on a slope. That would probably be in the 'pain minimizer' stage (where I try to relax the joints into less stressful postures, without losing the core constraints of where the head and hands need to have been delivered to achieve the critter's goals (bite)).

blabber blabber.

The lack of a depth buffer in drawVertices really limits how you can use it. I had to write a slightly complicated 'scanner' so as to always render the most distant triangles before the close ones (close to the camera). That really broke my mind, trying to do simple 90 degree rotations of iterating over a 2D grid, since x and y swap as you go around. I finally went full linear algebra on its ass and wrote the section as if it was a change in vector basis (multiply by array, basically). You can see an echo of that thinking in the above, where I add 'right' and 'down' variables, which are just offsets. Since there are 4 floats per vector (XYZ and W), adding four takes you to 'the point to the right' and adding 4*terrainCols just takes you to the line directly below.

so then I just had to set up a couple weird constants for each of the four orientations and then the stepping was easy.

I still need to cull back faces, if I care.

But rather than just being a fractal terrain, I was thinking: what if I lifted up the back of the sheet, to make a sort of 'back wall of the stage'? Then it seemed like maybe you could make a pretty nice 'scene' background (sort of a box with the front removed, with painted interior walls, but also some bumps). Like a room, or a cave. But just a background, with a scene-limited camera so you only see it from the nice direction.

Then, just like WoS classic, the characters would simply be 'in front of' this mesh. Sort of like those LucasArts games with the static backgrounds of 3D scenes, where the characters changed sizes as needed to imply they were 3D when really they were just 2D sprites in front of a nice background. Anyway, drawVertices might work pretty well for something like that.

Plus a finger-friendly 'cave editor' might be fun. Sort of a negative space, 'scoop out the pumpkin' experience.

But scenes with 3D 'stage backgrounds' (as opposed to a full world simulation) might be cool.

--------------------
He knows when you are sleeping.

Posts: 10875 | From: California | Registered: Dec 1998  |  IP: Logged
samsyn
Administrator
Member # 162

Member Rated:
4
Icon 1 posted      Profile for samsyn   Author's Homepage   Email samsyn   Send New Private Message       Edit/Delete Post 
It's blabberin' time!

Still not sure where the terrain stuff fits in synSpace (like it has to!), since the performance isn't really good enough when it has to fill the full screen. But small things here and there might work... so I might be able to do a reasonable character mesh on the skeleton, inside synSpace itself (without GLES), so long as it didn't need too many screen pixels.

Like pilot faces, maybe: as 3D heads with poseable/animated eyes and teeth.

You then see them 'on the thumb' and maybe on top of 'your' skeleton in the arena, but never filling the screen.

That might keep the frame rate up, but I don't want to depend on it.

As scene backgrounds for scripted story delivery... it would still slow you down, but you could turn off everything else, other than the characters wandering the stage, so it would pay for itself.

That would be more of a wos-like map/scene universe, where you were a 2D vector space ship until you 'landed' at which point fade to black and your eyes open in the Gorgeen Lord's Throne Room. With a nice animated lizard king on his throne, inside his 3D throne room. Where he keeps his throne. Coz he's a king.

* Start with a picture (think WoS background image)
* It is a 2D picture
* it fills the screen (you crop it as needed)
* now you draw a rectangle
* then you PUSH/PULL/DRAG that rectangle
* the image doesn't change, but the rectangle outline remains
* repeat, new rectangles joined to existing
* you are 'pushing' the photo backwards, until it fills a sort of cave

kind of like vacuum forming, or blowing up a balloon inside of a child's play house.

something within my abilities :-)

ANYWAY, so you basically are making a 3D model of the interior of a room guided by a single snapshot and your lifetime of experience and imagination.

The picture was taken 'from a specific camera angle' and you have a matching angle on the room you are 'making' so the unstretched image should perfectly fill the screen, and your goal is to place 3D points in all the interesting nooks and crannies.

If you then drag the 'live' camera away from the 'photo' camera position, we can, in theory, estimate the 'UV' values as needed to adjust the view.

Walls in the photo which are on edge to us would have grainier texturing than those which faced us in the original photo.

In theory, several photos of the same room could be used to build a composite, more evenly textured map. And then that image could be used in a real paint program to draw a brand new room, or embellish the original, which fits onto the existing vacuform.

Forget rectangles... you just tap to drop points, and every three points is a triangle. Tap in the corners of the room (on the photo) and make triangles covering all the walls (except the front wall, so the audience can watch). Two triangles per wall.

Then look for blocky projections from the walls (cupboards, tables, ceiling lamps). Always tracing some real element in the photo. Always from the matching camera angle.

for each point, you have to estimate 'distance from camera', but if you wiggle the live camera around, distance errors will appear as goofy texturing, so just adjust the distance until it looks right again...

Then zero processing: just hand that triangle list (OK, with the 'uv' being just the 2D coords of the photo) to drawVertices, and you can look at it from any camera angle you like.
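To be concrete about the 'zero processing' part: since every three taps is a triangle and the tap positions double as texture coords, the whole render can be a single drawVertices call with the photo wrapped in a BitmapShader. Rough sketch, with my own names, not the actual code:

    import android.graphics.*;

    void drawPhotoCave(Canvas canvas, Bitmap photo,
                       float[] screenXY,   // projected 2D positions, x,y per vertex
                       float[] photoUV) {  // the original tap positions, in photo pixels
        Paint paint = new Paint(Paint.FILTER_BITMAP_FLAG);
        paint.setShader(new BitmapShader(photo, Shader.TileMode.CLAMP, Shader.TileMode.CLAMP));

        // every three consecutive points is one triangle, exactly as tapped
        canvas.drawVertices(Canvas.VertexMode.TRIANGLES,
                screenXY.length, screenXY, 0,
                photoUV, 0,      // texture coords are just photo pixel coords
                null, 0,         // no per-vertex colors
                null, 0, 0,      // no index array: vertices used in order
                paint);
    }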

Perhaps the 'mesh' could also be used to constrain character (and camera) motion as you might expect.

And the mesh could be larger than a single room, so you teleport to the planet like a Star Trek landing team and you are in an area, say, the size of a football field (but an interior, like a cave or palace), so you can walk around for several screen-widths before you hit an actual wall. But still under 100 triangles or so total.

And by 'interior', I don't necessarily mean that literally... it might have a skydome (it's just a photo, it can have anything it likes for the pixels).

Well, if nothing else, I have now suggested vacuforming in both an inflating and a deflating context, so it's inevitable I will end up doing *something*. Maybe something minecrafty (blocky characters that can bend smoothly). Gumby.

But, the reason I am rambling, is because I just love it when the terrain looks nice AND you can see the active synspace game 'as the night sky'

Let's see if I can get a screenshot...

[screenshot]

--------------------
He knows when you are sleeping.

Posts: 10875 | From: California | Registered: Dec 1998  |  IP: Logged
samsyn
Administrator
Member # 162

Member Rated:
4
Icon 1 posted      Profile for samsyn   Author's Homepage   Email samsyn   Send New Private Message       Edit/Delete Post 
So, it looks like I am going to try my hand at the 'scene dome from photograph' editor, as one more page of the Super Power Fun Pak.

SCENE

The usual critter display with the 'skeleton' that you can orbit with a camera

but in this case, the 'skeleton' is actually a very low resolution mesh.

It starts off as a simple box facing the camera. And I put the photo as a reference image filling the screen.

You then use the normal critter controls to view the 'vertices' and move them in 3 space, until they line up with the photo.

Some control to split an existing 'triangle' into sub-triangles, so you can add more verts but always have a complete 'triangle list' that paints the entire room (like a cave you are inside of).
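One cheap way to do that split (just a sketch, not a commitment, and all names are made up): drop a new vertex at the triangle's centroid and replace the one triangle with three, so the list stays complete and still covers the same area.

    import java.util.List;

    static void splitTriangle(List<float[]> verts, List<int[]> tris, int triIndex) {
        int[] t = tris.get(triIndex);
        float[] a = verts.get(t[0]), b = verts.get(t[1]), c = verts.get(t[2]);

        // new vertex at the centroid (do the same for its UV if stored alongside)
        float[] m = { (a[0] + b[0] + c[0]) / 3f,
                      (a[1] + b[1] + c[1]) / 3f,
                      (a[2] + b[2] + c[2]) / 3f };
        int mi = verts.size();
        verts.add(m);

        // replace triangle ABC with ABM, BCM, CAM
        tris.set(triIndex, new int[] { t[0], t[1], mi });
        tris.add(new int[] { t[1], t[2], mi });
        tris.add(new int[] { t[2], t[0], mi });
    }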

Then you sort of 'extrude' things, and your goal is for them to look correct from all camera angles.

At least for a range of acceptable 'audience viewing angles' for a WoS-like scene, only in 3D with 3D critter actors (with chat bubbles, I guess).

And for synSpace, it would be like a WoS drop-in scene, as opposed to just taking place on 3D terrain.

But maybe it won't work at all, so I think it will be fun (bucket list) to have a quick try and see if it can 'deliver story' (probably with scripted camera control, as opposed to a completely free camera for the viewer). Sudden closeups during important lines, especially if I can have *some* form of face.

probably stupid, but this is what life is for!

--------------------
He knows when you are sleeping.

Posts: 10875 | From: California | Registered: Dec 1998  |  IP: Logged
samsyn
Administrator
Member # 162

Member Rated:
4
Icon 1 posted      Profile for samsyn   Author's Homepage   Email samsyn   Send New Private Message       Edit/Delete Post 
So, instead of that, I added a second terrain texture layer, by calling drawVertices a second time with a subset of the triangles (only the ones near 'you') and using a higher-resolution 'noise' map.

This adds a light texturing to the ground so there are small things to 'walk past'. Otherwise you lose the sense of actual dirt and it feels like you're just floating along over nothingness.

But, even though I am only rendering the closest triangles, they still fill most of the screen, so it's very slow (fill-rate limited). Plus it doesn't look very excellent, but I suspect there is a better Porter-Duff mode I could use.
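For the curious, the two-pass structure is roughly this (my names, and the blend choice here is just a guess at something better, not what the app actually uses):

    // inside onDraw(Canvas canvas); classes are from android.graphics
    // base pass: the whole terrain with the main texture
    Paint basePaint = new Paint(Paint.FILTER_BITMAP_FLAG);
    basePaint.setShader(new BitmapShader(marsTexture, Shader.TileMode.REPEAT, Shader.TileMode.REPEAT));
    canvas.drawVertices(Canvas.VertexMode.TRIANGLES,
            allVerts.length, allVerts, 0, allUVs, 0, null, 0, null, 0, 0, basePaint);

    // detail pass: only the triangles near 'you', with a higher-frequency noise map,
    // blended with a multiply-style Porter-Duff mode so it modulates the base layer
    Paint detailPaint = new Paint(Paint.FILTER_BITMAP_FLAG);
    detailPaint.setShader(new BitmapShader(noiseTexture, Shader.TileMode.REPEAT, Shader.TileMode.REPEAT));
    detailPaint.setXfermode(new PorterDuffXfermode(PorterDuff.Mode.MULTIPLY));
    detailPaint.setAlpha(96);   // keep the detail layer subtle
    canvas.drawVertices(Canvas.VertexMode.TRIANGLES,
            nearVerts.length, nearVerts, 0, nearUVs, 0, null, 0, null, 0, 0, detailPaint);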

However, it worked, so that's nice. Now I have to back up and get a zillion little bugs hammered down. I finally fixed some annoyances on the animation editor screen (plus it wasn't loading the saved animations correctly, but it WAS saving them correctly, go figger).

I started editing about 8 hours of progress videos... I have it down to about 3 hours now. Which means I spent at least 5 hours (probably double that) over the weekend doing this.

--------------------
He knows when you are sleeping.

Posts: 10875 | From: California | Registered: Dec 1998  |  IP: Logged
samsyn
Administrator
Member # 162

Member Rated:
4
Icon 1 posted      Profile for samsyn   Author's Homepage   Email samsyn   Send New Private Message       Edit/Delete Post 
OK, it's official, synSpace has become Rocket Club.

[screenshot]

That's my old favorite 'mars' texture on a fractal surface, driven from a grayscale elevation map of part of the Grand Canyon.

and a three-legged critter skeleton walking around in it.
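In case anyone wonders how the elevation map drives it: basically one sample per terrain vertex, scaling the pixel brightness into a height. A rough sketch with my own made-up names:

    import android.graphics.Bitmap;
    import android.graphics.Color;

    static float[] heightsFromBitmap(Bitmap elev, int rows, int cols, float maxHeight) {
        float[] heights = new float[rows * cols];
        for (int r = 0; r < rows; r++) {
            int py = r * (elev.getHeight() - 1) / (rows - 1);
            for (int c = 0; c < cols; c++) {
                int px = c * (elev.getWidth() - 1) / (cols - 1);
                int pixel = elev.getPixel(px, py);
                // grayscale, so any channel works; normalize 0..255 into 0..maxHeight
                heights[r * cols + c] = Color.red(pixel) / 255f * maxHeight;
            }
        }
        return heights;
    }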

[ 09-26-2019, 04:42 PM: Message edited by: samsyn ]

--------------------
He knows when you are sleeping.

Posts: 10875 | From: California | Registered: Dec 1998  |  IP: Logged
samsyn
Administrator
Member # 162

Member Rated:
4
Icon 1 posted      Profile for samsyn   Author's Homepage   Email samsyn   Send New Private Message       Edit/Delete Post 
OK, my big accomplishment this weekend was to finish editing down the development snapshots from the last 10 weekends. It's long and boring, as usual.

This one is focused on the critter terrain system.

When it's done uploading, it should appear here:

https://youtu.be/BwGT3UUNp6s

--------------------
He knows when you are sleeping.

Posts: 10875 | From: California | Registered: Dec 1998  |  IP: Logged
  This topic comprises 2 pages: 1  2   

Contact Us | Synthetic Reality

Copyright 2003 (c) Synthetic Reality Co.

Powered by UBB.classic™ 6.7.3