Topic: Rune Runners Blog

samsyn
I ran Well of Souls: Rune Runners (Google Play Store) on my new cheap tablet and it ran very very well, and pleased me. So now I am thinking maybe it's time to start folding in the synSpace developments, so it's time to start babbling here, as well!

Considerably more RAM is available than in 2010, so I think we can get to the next level. It's still to be thought of as a 'reduced' game, where the full RPG experience is boiled down into something simpler.

---

I think the first thing I would like to do is add the multiplayer stuff. Just keep it very simple. No fighting with other players, they just appear when nearby, and you can watch, but not interact.

I would also like to be able to phase in and out of multiplayer, independent of other factors. Like say you were playing SAMSYN in world IMBRYGLIA on map SANDSTORM in solo mode, and had been in solo mode all day. You tap something, and without YOU moving at all, other people start to appear (people who are also on that MAP and who have also opted for MP). If you opt out, you just fade away.

As far as the lua-scripted quest experience goes then, that has to start and stop with your entrance to a map, and that needs to be independent of other players. So your script mainly talks to the local player, which is also the case in synSpace of course.

Right now the game offers a list of maps, being the maps you have unlocked in the world (the only world at present being the small incomplete one that comes with the game; even more incomplete than Evergreen). But as I design what it means to be 'a world' I find I want to take a map-centric view, and say "one map, one lua script" and have them appear pretty much as individual things in a big scrolling list of 'maps you own'.

But, like synSpace, I want that list to also include 'maps other people own, who are playing them right now, and from whom you could get a copy automatically by joining in'.

And, unlike synSpace... well, maybe synSpace will do this someday: if you need a 'collection of maps' to be a unified 'world', then you make them a 'campaign' (where they share nonvolatile player progress data), and the UI somehow groups/filters them appropriately, so a player of a WORLD can usually just be looking at maps in that WORLD.

But since I still want that list MAP-centric, I want you to be able to switch to any MAP on the list, WITH YOUR CURRENT CHARACTER.

So instead of picking WORLD first, then CHARACTER, I want to pick CHARACTER first, then WORLD (actually then MAP, with the world just implied).

Which means SAMSYN can simultaneously be a level 1 newb in one world, but a level whatever in another. But my 'FACE' and other personalizations go with my CHARACTER (of which I can still have multiple, I just don't actually NEED as many, especially if I like to use the same names in lots of worlds).

Which means I have to break player data into two pieces: the common bit stored outside the maps, and the map-specific bit stored inside the maps, as it were. All in my SQLite tables, no doubt.
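
If it does end up in SQLite, a minimal sketch of that split might look something like this (table and column names are made up for illustration, not the real schema):

code:
#include "sqlite3.h"

// hypothetical schema: one row per CHARACTER (the shared bits),
// and one row per (CHARACTER, MAP) pair (the map-specific bits)
static const char* kSchema =
    "CREATE TABLE IF NOT EXISTS character ("
    "  char_id INTEGER PRIMARY KEY,"
    "  name TEXT, face_id INTEGER, personalization TEXT );"
    "CREATE TABLE IF NOT EXISTS character_map ("
    "  char_id INTEGER, map_id TEXT,"
    "  level INTEGER, xp INTEGER, progress TEXT,"
    "  PRIMARY KEY( char_id, map_id ) );";

bool createPlayerTables( sqlite3* db )
{
    char* errMsg = 0;
    int rc = sqlite3_exec( db, kSchema, 0, 0, &errMsg );
    if( rc != SQLITE_OK ) {
        // log errMsg somewhere useful, then release it
        sqlite3_free( errMsg );
        return false;
    }
    return true;
}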

Unfortunately, I can't do any of that until after I port the code to Android Studio (from Eclipse) and work through those issues. And I fear that, which will slow me down.

---

But once I get over that hump:

* add the NETWORK layer so I can talk to new linux servers

* add the MIXTERCLIENT so player-to-player data xfer can occur

* configure a server to support it (same server as WarPath and Drone Runners, but another instance)

* establish a starting-state xfer protocol, with a simple "I am HERE, and wearing THIS" packet sent at regular intervals by each player on the server (a minimal sketch of such a packet appears just after this list). If their HERE is close to yours, I render them appropriately.

* merge in the ASSET/COLLECTIBLE modules, and define the appropriate MAP assets.

* merge in the threaded-lua Module

* wire up scripted control of characters and environment (that'll take awhile)

* Add social/personalization options

* Add multiplayer synched battles/scenes

* add hosting (one player hosts anything that needs to be seen by multiple players - scenes or bots)

* come up with the right blend of editors for skeletons, poses, animations, effects, particles, spells, weapons, sounds, teeth, ears, tails, skulls, wings, skins, etc., as needed to easily add critter and stationary (trees, boxes) objects. And probably cut that list down to the basics
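
To make that "I am HERE, and wearing THIS" packet concrete, here's a minimal sketch of what it might contain (field names and sizes are guesses for illustration, not the actual protocol):

code:
// hypothetical presence packet, sent every few seconds by each player
#pragma pack(push, 1)
struct PresencePacket
{
    unsigned char  packetType;     // e.g. PKT_PRESENCE
    unsigned int   playerId;       // who this is
    char           mapId[16];      // which MAP they are on
    float          x, y, z;        // their HERE
    float          heading;        // which way they face
    unsigned short outfit[8];      // wearing THIS (equipment/cosmetic ids)
};
#pragma pack(pop)

// receiver side: only render them if their HERE is near mine
bool shouldRender( const PresencePacket& p, float myX, float myY, float range )
{
    float dx = p.x - myX;
    float dy = p.y - myY;
    return (dx * dx + dy * dy) <= (range * range);
}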

----

For example, I will probably be the only skeleton 'author' initially (providing a handful), but provide a pose/animation 'studio' since crafting cool animations that support specific story lines on your map is a priority. You don't just walk on stage, you SAUNTER, or LIMP, or TRIP, or whatever.

You should be able to craft a TELLTALE 'tale' out of it, albeit low resolution... ideally, I mean. I just played episode one of Minecraft: Story Mode and was very excited (not so much the mining, but the storytelling).

Terrific Voice Acting helps, of course. I was idly thinking of using the vocoder to let you record some Charlie Brown adult narration for the world, as your characters spoke in chat bubbles. Or just let google read it aloud. But the Charlie Brown voice would let you sort of give an attitude/emotion to the narration, if not actual intelligibility. (And the idea would be that it would be no more bandwidth than the equivalent amount of text, so it could be baked into the map script where it was used.)

---
Today I was too busy with the day job to actually do anything but blabber. But I finally fixed my build issues (day job), so that's a relief. I switched to a new branch a while back and confidently deleted the old one, without realizing I needed a bit of it for the new one to work. Plus some other facts of life had changed in the interim, so it was confusing and broken for a bit, and that led to some extra stress. All better now.


samsyn
Hmmm, an organizational blabber

Things it would be nice to have done someday

* skeletally animated critters on Windows and Android (Rocket Club, WoS, Rune Runners, and Farmy), ideally using a common model creation technology, but rendered by two different engines (OpenGL ES and DX11)

* scripted adventures/battles on Windows and Android (WoS, RR, RC, and Drone Runners), again using a semi-common scripting library based on lua

That's about it, isn't it? That's all I ever talk about. Critters walking around and fighting, in a scenario designed by a fellow player.

Well, and music

I should be more story-centric, and only code on the engine when I need more support for my story.

So, what's a story?

====
Here's a story. Internet Lawyers, please make me rich!

So, this guy and this lady work for this company, doing the same thing. One day, the boss shows up and says, "I'm getting old and tired, and I need a replacement" (maybe that's the title: "The Replacement").

He continues, "I think either one of you could do it, so really it's going to boil down to which of you can train your own replacement first. First to do that gets the job."

So, the competitive couple immediately start scheming to come up with a replacement first, sabotaging each other's efforts and such, and just generally informing the audience that these are not nice people, so it's OK that something bad will likely happen to them.

We meet the replacements, and there is some romantic fiddle faddle between the four of them

And after about half a movie's worth of that, they both announce to the boss that their candidate is ready.

And the boss comes up with some ridiculous test ("sell the magursky account!") and the interns head off to compete to do that, but, of course, the audience sees them work more as a team (because they are already falling in love, of course) and so they manage to either both fail or both win, whichever pulls the most tears (film both options and test!)

So now the boss needs to pick his replacement and... it's all fake. The boss is not retiring; he just wanted to replace both of these people (and now we sympathize), and so the interns get the jobs and the other guys....

<shop for embedded marketing deals, then insert regionally appropriate ending here>

How would YOU end the story?


felty
Probably would be better to use OpenGL and OpenGLES -- better yet, stick to a GLES compatible subset for the desktop version, then it stands a chance of being easier to get compiling on Linux or Mac in the slim event that such a feat is ever attempted.

samsyn
I think I just spent two days of constant recompilation just getting a single line segment to render in the appropriate place :-) (indicating direction of light). Imagine if I needed to do TWO lines.

I'm wading in a soup of old school OpenGL, new GLES on Android, DX8, DX9, and DX11 at the moment, and not yet really grokking the commonality at the level where I could write a solid intermediate API that hid the details without recreating massive numbers of details. I think energy should get focused on DX11 and GLES, with as much done in a generic middle layer as possible, but probably lots of api-specific hacks that would need to be tweaked in a platform change. So, like, rendering a posed model with materials should be in the middleware, but setting up the camera stuff is probably two separate custom routines, only one of which gets written to start with. Something like that. START with a middleware lib (with no content), then always try to add new functionality there first when possible. Or move it there eventually if you develop it elsewhere.

But I find that if you KNOW you need to move the code, it's always better to move it sooner rather than later. There's an amazing amount of inertia in glue code.

Today my math problem is the generation of 'tangent' and 'binormal' unit vectors, to complete (with the 'normal') the vector basis of a single vert, for use in tangent map application. To do this, the 'hard part' is just finding lines that are parallel to the UV coordinate axes, at that vertex.

To do that, you start with some verts (the three verts of a single triangle, in my case). For each of them, you know an XYZ position and a UV texture coordinate. Since the meshes are often not flat, the flat texture gets mapped to a curvy surface, plus it can be rotated, so the direction of U and V in model space can be unusual.

But you know the exact values for 3 verts, and with that you can solve a system of linear equations which gives you the U and V directions without much work at all, other than degenerate cases (a zero determinant) where the verts or UVs line up badly.

In my case, I am only doing this once in my 'importer' so it doesn't have to be particularly fast, so I think it will be:

for every triangle
    for each vert in triangle
        solve the XYZUV equations
        compute normal, tangent, and binormal
        add them to integrating values for each vert

followed by a second pass where you normalize all those integrating sums. This way the normals are affected by all the faces touching that vert, and not just one of them.

There. Sounds so easy when you put it that way. Let's see if I can really solve the system of equations...

I have three verts of a triangle: V1, V2, V3
they have XYZ and UV components, named X1, U2, etc

the difference between vert 1 and 2 gives us

(XYZ2 - XYZ1)

And we know that maps, in another space, to the difference between the UV components

(UV2 - UV1)

We know these are linear in both spaces, though I won't try to prove that.

But basically I am going to say the ratio between the V1V2 and V1V3 deltas will be 'the same' in both spaces. so

((XYZ2-XYZ1)/(XYZ3-XYZ1)) = ((UV2-UV1)/(UV3-UV1))

we need to avoid dividing by zero, so the verts cannot share locations, and the UVs should also be non zero. If you have 0 for UV3-UV1 here, you'll have to do something else (maybe invert both sides)

some test data, our 3 verts. UV is mapped to a sub field (X,Y,Z)(U,V)
code:
  V2 (0, 10, 0)(.5, 1)
  |  \
  |   \
  V1--------V3 (10, 0, 0)(1, .5)
  (0, 0, 0)(.5, .5)

So, we want to solve for XYZ where UV=(0,1) and UV=(1,0)

and we feel
((XYZ2-XYZ1)/(XYZ3-XYZ1)) = ((UV2-UV1)/(UV3-UV1))

V2-V1 gives us a delta of (0,10,0)(0, .5)
V3-V1 gives us a delta of (10,0,0)(.5, 0)

From the XYZ lengths, we know that 10 units in XYZ space spans 0.5 units of V in UV space. And since it has no effect on U, we can infer U is independent of Y, so we're almost done already: just multiply by 2 to get the UV to (0,1), giving us (0,20,0) as the delta that is parallel to the V axis.

We already have the normal, so we would probably cross product here, rather than independently computing the binormal from V3-V1, but I think I will do both as a means to vet the code.

But what if the UV is rotated a bit instead of just offset?

code:
  V2 (0, 10, 0)(.2, 1)
  |  \
  |   \
  V1--------V3 (10, 0, 0)(1, .7)
  (0, 0, 0)(.5, .5)

OK, so
V2-V1 gives us delta (0,10,0)(-.3, .5)
V3-V1 gives us delta (10,0,0)(.5, .2)

so a distance of 10 in XYZ took us about .6 in UV

So, we seek a new XYZ, a unit vector, which maps to (0,1) in UV space. It will then be our 'tangent'. We know that a change in Y now affects both U and V. So we will have 2 equations

U = aX+bY+cZ
V = dX+eY+fZ

plugging in the known

V2-V1 gives us delta (0,10,0)(-.3, .5)
V3-V1 gives us delta (10,0,0)(.5, .2)

-.3 = a0+b10+c0
.5 = d0+e10+f0
.5 = a10+b0+c0
.2 = d10+e0+f0

-.3 = b10
.5 = e10
.5 = a10
.2 = d10

so
a = .05
b = -.03
c = ?, let's say 0
d = .02
e = .05
f = ?, let's say 0

Now we want the XYZ of UV (0,1)
0 = 0.05*X - 0.03*Y + 0*Z
1 = 0.02*X + 0.05*Y + 0*Z

since z is out of the mix, just 2 equations with 2 unknowns

0 = 0.05*X - 0.03*Y
1 = 0.02*X + 0.05*Y

0 = 0.25*X - 0.15*Y
1 = 0.06*X + 0.15*Y

1 = 0.31*X
X = 1/0.31
X = 3.225
Y = 5.376

So for every change in XY of (3.225, 5.376), UV moves just along the V axis a distance of (0,1), hence the normalization of that XY(Z) is our 'tangent'

Looking at our original picture... OK, I will draw it on graph paper

And, .... that's wrong.

OK, I see I started off wrong. With V1, V2, V3, I have THREE example deltas V2-V1, V3-V1, and V3-V2. With two equations each I should get the six I need to solve for the six unknowns

Making a new example with nicer UV mapping for testing, I come up with

U = .06X + 0.04Y
V = -.04X + 0.06Y

So I should be able to predict the UV of a random XY in that space. Say (5, 5, 0)

U = .06*5 + 0.04*5
V = -.04 * 5 + 0.06 * 5

which I think is

U = 0.5
V = 0.1

Which on my graph paper, looks...wrong. I expected .6, .6

But I think I'm on the right track, just have to be a little more rigorous..

Start with a fresh piece of paper :-)


samsyn
from https://stackoverflow.com/questions/5255806/how-to-calculate-tangent-and-binormal

code:
The relevant input data to your problem are the texture coordinates. 
Tangent and Binormal are vectors locally parallel to the object's surface.
And in the case of normal mapping they're describing the local orientation of the normal texture.

So you have to calculate the direction (in the model's space) in which the texturing vectors point.
Say you have a triangle ABC, with texture coordinates HKL.
This gives us vectors:

D = B-A
E = C-A

F = K-H
G = L-H
Now we want to express D and E in terms of tangent space T, U, i.e.

D = F.s * T + F.t * U
E = G.s * T + G.t * U
This is a system of linear equations with 6 unknowns and 6 equations, it can be written as

| D.x D.y D.z |   | F.s F.t | | T.x T.y T.z |
|             | = |         | |             |
| E.x E.y E.z |   | G.s G.t | | U.x U.y U.z |

Inverting the FG matrix yields

| T.x T.y T.z |           1           |  G.t  -F.t | | D.x D.y D.z |
|             | = ------------------- |            | |             |
| U.x U.y U.z |   F.s G.t - F.t G.s   | -G.s   F.s | | E.x E.y E.z |

Together with the vertex normal T and U form a local space basis, called the tangent space, described by the matrix

| T.x U.x N.x |
| T.y U.y N.y |
| T.z U.z N.z |

Transforming from tangent space into object space. To do lighting calculations one needs the inverse of this.

With a little bit of exercise one finds:

T' = T - (N·T) N
U' = U - (N·U) N - (T'·U) T'

Normalizing the vectors T' and U', calling them tangent and binormal we obtain the matrix transforming from object
into tangent space, where we do the lighting:

| T'.x T'.y T'.z |
| U'.x U'.y U'.z |
| N.x N.y N.z |

We store T' and U' together with the vertex normal as a part of the model's geometry (as vertex attributes),
so that we can use them in the shader for lighting calculations. I repeat:
You don't determine tangent and binormal in the shader, you precompute them and store them as part of the model's geometry (just like normals).

(The notation between the vertical bars above are all matrices,
never determinants, which normally use vertical bars instead of brackets in their notation.)



[ 08-21-2017, 11:16 PM: Message edited by: samsyn ]


samsyn
I think maybe at this point, I am done:

code:
| T.x T.y T.z |           1           |  G.t  -F.t | | D.x D.y D.z |
|             | = ------------------- |            | |             |
| U.x U.y U.z |   F.s G.t - F.t G.s   | -G.s   F.s | | E.x E.y E.z |

Together with the vertex normal T and U form a local space basis, called the tangent space, described by the matrix

| T.x U.x N.x |
| T.y U.y N.y |
| T.z U.z N.z |

Where I then pass T as the tangent, and U as the binormal, and try not to confuse s/t with u/v in the texture coords.

so a scaled array multiplication gives me what I want. Cool.

sigh, I really need to move my email...

code:
on second thought, I think I also need to do the inversion, but it's not as bad as it first looked


void calculateTangents( int numVerts, VERT* verts )
{
    int numFaces = whatever();
    for( int f = 0; f < numFaces; f++ ) {
        // for each face (triangle)
        TRIPLE N = faceNormal( f );   // we have a face Normal already

        // x0..z2 and u0..v2 below come from the three verts of this face
        // get two edges in XYZ space
        TRIPLE D, E;
        D.set( x1 - x0, y1 - y0, z1 - z0 );
        E.set( x2 - x0, y2 - y0, z2 - z0 );

        // and the matching edges in UV space (uv stored in xy, z unused)
        TRIPLE F, G;
        F.set( u1 - u0, v1 - v0, 0 );
        G.set( u2 - u0, v2 - v0, 0 );

        // now we do this bit
        // | T.x T.y T.z |           1           |  G.t  -F.t | | D.x D.y D.z |
        // |             | = ------------------- |            | |             |
        // | U.x U.y U.z |   F.s G.t - F.t G.s   | -G.s   F.s | | E.x E.y E.z |

        // that scale factor (where G.xy is G.uv is G.st)
        float denominator = (F.x * G.y - F.y * G.x);
        if( denominator == 0 ) {
            // degenerate UV mapping on this face; skip it
            // (or try a different edge pair)
            continue;
        }
        float k = 1.0f / denominator;

        // compute T and U (our face tangent and binormal) from edge data
        TRIPLE T, U;
        T.x = k * ( G.y * D.x - F.y * E.x );
        T.y = k * ( G.y * D.y - F.y * E.y );
        T.z = k * ( G.y * D.z - F.y * E.z );

        U.x = k * (-G.x * D.x + F.x * E.x );
        U.y = k * (-G.x * D.y + F.x * E.y );
        U.z = k * (-G.x * D.z + F.x * E.z );

        // in theory T x N should give me U (up to sign), so I could
        // check my work here: if( dist( U, cross(T, N) ) > epsilon ) complain

        // but it looks like I need the inverses
        TRIPLE Ti, Ui; // the final tangent, binormal passed in geometry

        // T' = T - (N·T) N
        // U' = U - (N·U) N - (T'·U) T'
        float NT = dot( N, T );
        float NU = dot( N, U );
        Ti.x = T.x - NT * N.x;
        Ti.y = T.y - NT * N.y;
        Ti.z = T.z - NT * N.z;

        float TiU = dot( Ti, U );
        Ui.x = U.x - (NU * N.x) - ( TiU * Ti.x );
        Ui.y = U.y - (NU * N.y) - ( TiU * Ti.y );
        Ui.z = U.z - (NU * N.z) - ( TiU * Ti.z );

        Ti.normalize();
        Ui.normalize();

        // do some more error checking to prove
        // N, Ti, Ui are unit and mutually orthogonal.
        // test this with some screwy UV mapping on a normal map,
        // like upside down mapping should NOT invert bumps.

        // should adjacent faces have equal votes (by
        // aggregating normalized face tangents) or
        // should larger faces have more say?
        // perhaps deterministically weighted. anyway...

        // aggregate into the three verts this face touches
        for( int v = 0; v < 3; v++ ) {
            int vi = vertIndexOfFace( f, v );   // however the mesh stores it
            verts[ vi ].tangent  += Ti;   // maybe reversed
            verts[ vi ].binormal += Ui;
        }
    }

    // second pass normalizes the accumulated sums
    for( int v = 0; v < numVerts; v++ ) {
        verts[ v ].tangent.normalize();
        verts[ v ].binormal.normalize();
    }
}

Fingers crossed that travels well... it needs to test for a 0 determinant first (and then what? choose a different edge pair?). I only use a TRIPLE for UV since it is an existing datatype, but I wimp out and do my own array multiply in line to account for the mismatched sizes of things. And to be lazy, I mean. Or stupid. But that shape passes the smell test. The correct answer will look roughly like that, with some error checking and optimizations.

[ 08-22-2017, 01:24 AM: Message edited by: samsyn ]


samsyn
Oh my, I already forgot about that digression. That turned out to be just the tip of the iceberg of changes needed. I'm still finding bits of code that were not completed 12 years ago (when we paid for it).

But getting back to WoS:RR

I made a short youtube film to document where the game is today. And once again my camera lied to me and said it was filming in landscape, when really it was doing portrait. Actually, I suddenly understand why this seems to be the case... because at the angle I shoot at (straight down), both actually look the same.

Plus, I just uploaded like a full gig to youtube for a 10 minute film and let them do the conversion to something stream friendly. How lazy is that? But they probably get that a lot. Also, I let them recode the film in landscape. And yes, I have tools that could have done this before uploading, to everyone's benefit, except for my lazy gene.

Now I need to relearn the logic of converting from 4x4 matrices to 4-value quaternions, because it looks like the collada code I am using is doing that incorrectly. I know it was never tested, because I just fixed a bug where it was looking at completely the wrong data. And I don't mean that as an accusation; I'm sure I leave behind time bombs all the time.
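
For my own notes, the standard trace-based recipe for pulling a quaternion out of the rotation part of a 4x4 matrix looks roughly like this (a generic sketch, assuming a row-major matrix whose 3x3 part is a pure rotation with no scale; it is not the collada code itself):

code:
#include <math.h>

struct Quat { float x, y, z, w; };

// convert the rotation part of a (row-major) 4x4 matrix to a quaternion.
// note: a transposed/column-major convention flips the subtraction signs.
Quat matrixToQuat( const float m[4][4] )
{
    Quat q;
    float trace = m[0][0] + m[1][1] + m[2][2];
    if( trace > 0.0f ) {
        float s = sqrtf( trace + 1.0f ) * 2.0f;                         // s = 4*w
        q.w = 0.25f * s;
        q.x = ( m[2][1] - m[1][2] ) / s;
        q.y = ( m[0][2] - m[2][0] ) / s;
        q.z = ( m[1][0] - m[0][1] ) / s;
    } else if( m[0][0] > m[1][1] && m[0][0] > m[2][2] ) {
        float s = sqrtf( 1.0f + m[0][0] - m[1][1] - m[2][2] ) * 2.0f;   // s = 4*x
        q.w = ( m[2][1] - m[1][2] ) / s;
        q.x = 0.25f * s;
        q.y = ( m[0][1] + m[1][0] ) / s;
        q.z = ( m[0][2] + m[2][0] ) / s;
    } else if( m[1][1] > m[2][2] ) {
        float s = sqrtf( 1.0f + m[1][1] - m[0][0] - m[2][2] ) * 2.0f;   // s = 4*y
        q.w = ( m[0][2] - m[2][0] ) / s;
        q.x = ( m[0][1] + m[1][0] ) / s;
        q.y = 0.25f * s;
        q.z = ( m[1][2] + m[2][1] ) / s;
    } else {
        float s = sqrtf( 1.0f + m[2][2] - m[0][0] - m[1][1] ) * 2.0f;   // s = 4*z
        q.w = ( m[1][0] - m[0][1] ) / s;
        q.x = ( m[0][2] + m[2][0] ) / s;
        q.y = ( m[1][2] + m[2][1] ) / s;
        q.z = 0.25f * s;
    }
    return q;
}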


samsyn
I also have unfinished collada importer business to deal with, but this is the Rune Runner blog, so I now blog.

Last month I had a disk drive disaster that 'lost' the sources to all my android projects, and the only backups I could find were from 2016. While I have been very low-productivity as a rule, that was still a depressing step backwards for synSpace: Drone Runners (which the canny reader will remember is really a test bed for new features in WoS: Rune Runners).

But thanks to various unearned miracles, I was able to recover all the source files from the bad drive, and now all my android stuff is mounted on my new PC (well, 2 years old by now, but only recently becoming the primary).

The canny reader adds "But weren't your Android projects all based on Eclipse, and in need of being ported to Android Studio because Eclipse was no longer supported by Google?"

And yes, that too is something I had been putting off, though I still hadn't done anything about it 2-3 years after Google removed support.

I had many fears about the process, because I had such a weak understanding of 'what the heck Eclipse was doing' (which I think just means 'all those ant files'). I never learned ant.

I had other fears, like "is the lua and PlayStore support the same, still available?"

But the disaster forced me to confront the issue, and as you might have seen in another post, Android Studio has done a good job of handling people in my situation, making the conversion a 2 click automated process.

After which, you have to react to various error messages, all of which have been encountered on StackExchange and you can usually puzzle out some bit of Gradle code (I already know more gradle than I ever knew of ant) that fixes it -- usually by upgrading to the newest version of this or that piece of the system.

The most pernicious problems porting all my android apps turned out to be bugs in the Intel accelerated emulator (which I think could be fixed by a version upgrade, but I haven't stumbled on that yet). And I'm not sure that wasn't a pre-existing condition, since I never used the emulator on Eclipse, because it was so slow.
---

So, Rune Runner builds on Android Studio now, and runs on both the emulator (with a reasonable frame rate), and on a real device (Kindle Fire).

And I even fixed an old bug which would oddly clip text on various overlay panels. I thought it was some subtle flaw in my OpenGL code that mapped only the left side of the geometry to the quad's texture image.

But finally I realized it wasn't the copying of the image from the bitmap to the OpenGL buffer; it was the actual composition of the bitmap, where I had an inadvertent clip rect set up, and the missing text was just being clipped by that. Clip rects have burned me a few times, and I am slow to grok what is happening.

So I played through all the content (and I'm sure I once had more content than this; I know it was possible to get the bird mount via quests on the Mount Diablo map) and wished there were more. So that's a good sign.

My hands didn't burn out on the UI, and I have an excellent tutorial in there (which presumably will be completely re-written in lua)

Aside from Lua, I need the right sort of creature editor. I can't demand 3dsMax or Maya just to be a creature designer. Currently it handles MilkShape (as used by Rocket Club). And since Blender has a MilkShape exporter... probably I will focus more on Blender.

To get that right means the exporter has to do the right thing for skeleton stuff and animation poses, with as much opportunity as possible to take part at each level ("I just like to do skeletons", "I just like to do models", "I just like to paint textures", "I just like to name things", "I just like to make up stats", "I like to think up new diseases", "I like to write dialog and make the reader care"), so the ability to share a Blender file for collaborative work seems nice.

So, trying to keep it 'asset driven', there would be a Blender/MilkShape file for each character. It could include the mesh, material properties, texture references, skeleton, and animation keyframes (joint angles). Also one or more image assets to drive the various 'maps'. But all hit with the 'tiny' hammer, limiting asset quality just out of spite.

Ultimately, one more file... a 'collectible' form new to Rune Runner would be a text file that defined the character: which Blender file to use, which texture files, and the game-specific stats of the character.

This should be editable/clonable, as per other collectible forms, so you can 'make a new wolf based on an old wolf' where maybe you just change the name (inside collectible), or one of the texture images, or the max hitpoints, or the names of things it might drop when vanquished in battle.

So in the simplest case... a map could be put into 'edit mode' by the current moderator, after which designated players could drop creatures 'here and there' and fine tune them as needed.

Quests should then be written offline with a nice keyboard, using placeholder NPC IDs. Then, if you are in the world, in edit mode, adjusting an NPC, you can also declare it to be the holder of role X of Quest Y. Then the game UI offers the player all suitable quests when chatting with that NPC.

OK, that doesn't feel particularly easier than just doing it all in a massive text file ('the map') with some collectible models and images. But I have this unscratched itch to make it super easy to add 'one little thing' to an existing world, without having to know much about the rest of the world. Possibly that's a disastrously incorrect goal.


samsyn
OK, let's focus on Rune Runner being a storytelling game where a single author/group controls the whole map. Hopefully they can add models and textures collected from other maps. But at the end of the day there is a master file, and only the quests and monsters in that file will be seen by the players on that map.

So it can be a goal that it is easy for a map designer to add one new monster to their OWN map; it doesn't have to be possible to do it on other people's maps.

----

Ultimately a map file will be a large text file with loads of lua in it. With re-bindable references to other assets (bitmaps, mainly, and models).

Monster stats to be handled like Drone Runners bots and scenes, as lua tables that inherit from other lua tables.

Hmmm, I guess if I wanted to use exactly the same monster on a second map, I would have to copy and paste its lua table into the second map file. And players might well appreciate lots of novelty per map. But not sure now if the synSpace 'campaign' metaphor works in the WoS 'world is a set of maps' setup.


samsyn
I was able to resurrect my 'debug key' over the weekend, so now I can install new builds on my test devices without having to blow away the prior install (losing all data).

This required grabbing the "Users" folder from my broken disk drive, and while the recovery tool is doing a good job at that, it claims it will take over a week to complete (damaged regions are read repeatedly until they work, leading to extremely low data xfer rates in bad regions of the disk).

But I eventually got the small file needed and google showed me where to put it.


samsyn
Here's an early screen shot of my Critter Skeleton editor. Not much of an editor yet, but has a nice camera and a basic biped to get started.

[screenshot: early Critter Skeleton editor, with camera and basic biped]

This is inside synSpace: Drone Runners, of course. The idea being this is another 'collectible' type (like SHELL or FACE) which you noodle together when you have nothing better to do.

In theory, the code is reusable in all my android games. It was intended mainly to drive the animation for creatures in WoS: Rune Runners (eventually), but for now it will do some RTS-like mini-games as StarMaps (think: you're flying your spaceship when suddenly a gigantic god biped appears and starts trying to step on you as you shoot at it). Or spider monsters after landing on a planet. That sort of thing.

I want my 'Critter' class to completely contain the skeleton/pose/animation (and particles?), but not the skinning/rendering (which vary with the host environment). I would like it to automatically handle things like "matching the animation of legs to move appropriately to stay in ground contact and not 'ice skate'".

Plus some IK to keep feet above ground, and maybe even a little 'I will actually fall over if you pose my center of mass outside of my ground-contact shadow'.

As mentioned elsewhere, I would like a sense of urgency when the critter detects an imbalance and it is driven to move one of its feet in such a way as to keep the center of mass supported. i.e. learn to walk, or at least 'fall in a desired direction'

Likewise, I'd like animations to all be a little loose, with some physics like gravity and springy bones/joints, so that the same animation looks a little different depending on local circumstances.

Again, for now, this is for very tiny RTS 'units' in sS:DR which are literally rendered as little skeletons.

----
And I'd like to draw attention to those simple 'bone outlines' that connect the ball of the joint to the tip of the bone. Those lines magically follow the 'surface tangent' of the circle no matter how you move the camera or joint.

I'll pause while you consider how YOU would do that.

I did it like this:

Starting with the 3D points at the center of the joint and the tip of the bone, and a radius for the joint 'circle' (in meters, just like the skeleton definition)

I map both points to my 2D screen (using current camera settings) to get two (x,y) pixel location values, and the radius converted to pixels as well. I render a 2D circle of that radius centered on the joint point.

Then, making a vector from joint to tip, I divide it by its length to make a unit vector (ok, this costs me a square root, it's not THAT cool). Then if you take the itty bitty xy values of this unit vector, swap them and negate one (-y, x), you get a unit vector at right angles to the first. Negate both of those (y, -x) and you get the 'opposite' right angle. Those two, scaled by the radius and added to the joint center, turn out to be perfect contact points for this '3d contour' solution.
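
In code, that trick is only a handful of lines; something like this sketch (variable names are made up, and it assumes I already have the two projected 2D points and the joint radius in pixels):

code:
#include <math.h>

// jointX/jointY and tipX/tipY are the 2D screen positions of the joint
// center and the bone tip; radius is the joint circle radius in pixels.
// Returns the two tangent-line contact points on the circle's edge.
void boneOutlinePoints( float jointX, float jointY, float tipX, float tipY,
                        float radius,
                        float* ax, float* ay, float* bx, float* by )
{
    // unit vector from joint to tip (the one square root)
    float dx  = tipX - jointX;
    float dy  = tipY - jointY;
    float len = sqrtf( dx * dx + dy * dy );
    if( len <= 0.0f ) { *ax = *bx = jointX; *ay = *by = jointY; return; }
    dx /= len;
    dy /= len;

    // the two perpendicular unit vectors: (-dy, dx) and (dy, -dx)
    *ax = jointX + (-dy) * radius;   *ay = jointY + ( dx) * radius;
    *bx = jointX + ( dy) * radius;   *by = jointY + (-dx) * radius;

    // then draw a line from (ax,ay) to the tip, and from (bx,by) to the tip
}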

In theory, you could use this to make lines from edge of joint to edge of next joint (instead of to bone tip) for a sort of '2d skin' following the joints, and whose radius could be adjusted per joint.

But I just wanted a simple bone visualization that wasn't too expensive and didn't have too many thousands of overlapping lines to look through.

----

To keep it portable, I have it do all its own 'manager' stuff (like stepping its own little physics system), and that required its own render system (menus and 3d picking/editing), which might be in conflict with the host environment. So basically, it's all canvas related: I render to your canvas.

It might be the actual screen (for a 2D game like synSpace) or it might be to an off screen bitmap you then push into open gl and render as a quad in the plane of the display. You configure (one of) these as your official manager for all critters in the game. Perhaps it even plays nice with my reusable network object (to distribute player-initiated changes, preferably as queued animations).

The host provides a touch interface, which I hope will be my reusable touchhelper, but one that can translate in its head if the offscreen bitmap is a different resolution than the screen. Pretty sure I already did that in WoS: Rune Runners, in a limited fashion.
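
That translation is just a scale in each axis; a tiny sketch (assuming the simple case where the bitmap is stretched to fill the whole screen):

code:
// hypothetical mapping of a screen touch into offscreen-bitmap coordinates
void screenToBitmap( float touchX, float touchY,
                     int screenW, int screenH,
                     int bitmapW, int bitmapH,
                     float* outX, float* outY )
{
    *outX = touchX * (float)bitmapW / (float)screenW;
    *outY = touchY * (float)bitmapH / (float)screenH;
}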

I get a perverse thrill out of how deep this menu system could go. I could hide all sorts of easter eggs.... menus that look exactly like others, but aren't.

I needed some test data and I thought: "Hey, I have this vocoder that already turns live sound into real time transcription of pitch and volume. Couldn't I use that to steer some joint movement in the little skeleton I have on display in my editor?" Like map certain notes or chords to different poses, and then slerp towards whichever ones occur, changing direction as the notes change, but otherwise moving smoothly between poses.

Poses you picked ahead of time to convey an appropriate movement. In general, I want the UI to offer a lot of slerping (that's even better than lerping, don't you know?), and have you pick two poses and then have a slider to blend between the two. And then use THAT as the actual pose (say for an animation keyframe). And while you had these two poses on the test stand, you could maybe click to enable which joints each pose contributed.

So you might want to make a sequence of 8 running poses that only differ in what the right arm is doing, which is then taken from another pose (or set of poses, or separately running animation).
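
The blend itself could be as simple as this sketch (it cheats and uses a normalized lerp of per-joint quaternions as a stand-in for proper slerp, and the Quat layout and joint mask are made-up names):

code:
#include <math.h>

struct Quat { float x, y, z, w; };

// cheap normalized-lerp between two joint orientations (a stand-in for
// proper slerp; good enough for poses that aren't too far apart)
static Quat nlerp( Quat a, Quat b, float t )
{
    // take the short way around
    float dot = a.x*b.x + a.y*b.y + a.z*b.z + a.w*b.w;
    if( dot < 0.0f ) { b.x = -b.x; b.y = -b.y; b.z = -b.z; b.w = -b.w; }

    Quat q;
    q.x = a.x + (b.x - a.x) * t;
    q.y = a.y + (b.y - a.y) * t;
    q.z = a.z + (b.z - a.z) * t;
    q.w = a.w + (b.w - a.w) * t;
    float len = sqrtf( q.x*q.x + q.y*q.y + q.z*q.z + q.w*q.w );
    q.x /= len; q.y /= len; q.z /= len; q.w /= len;
    return q;
}

// blend poseA toward poseB with slider t, but only for joints enabled
// in the mask; everything else keeps poseA's value
void blendPoses( const Quat* poseA, const Quat* poseB, const bool* jointEnabled,
                 int numJoints, float t, Quat* outPose )
{
    for( int j = 0; j < numJoints; j++ )
        outPose[j] = jointEnabled[j] ? nlerp( poseA[j], poseB[j], t ) : poseA[j];
}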

And, while I am vaporware-ing, there should always be one (or several) animations running, whose effects all blend together. Plus physics interactions like IK pushback from collisions, and springy recovery from falling.

Like, you knock it down somehow and it wants to get back up. It knows what pose it wants to get to, but rather than play a canned animation of standing up (which is possible, but more work for the host), it has to sort of figure it out through a series of small 'legal' changes in pose as it tries to minimize differences between current and ideal poses.

Each change contributes to a joint velocity. Joints have mass. There is a center of mass. It casts a shadow on a ground plane (provided by the host if not obvious), and the joints in contact with the ground form the 'stable ground area' polygon/oval (I was thinking maybe an oval where the foci were the two feet, so to speak, then several ovals in the case of more feet). If the shadow of the center of mass is inside any oval, then you are stable and not falling down.
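
The oval test itself is cheap: with the two foot contacts as the foci, the classic ellipse definition (sum of distances to the foci under some threshold) does the job. A sketch, with made-up names:

code:
#include <math.h>

static float dist2d( float ax, float ay, float bx, float by )
{
    float dx = ax - bx, dy = ay - by;
    return sqrtf( dx * dx + dy * dy );
}

// footL/footR are the two ground-contact points (the oval's foci),
// comX/comY is the center-of-mass shadow on the ground plane, and
// ovalSize is the major-axis length (how generous the stable region is).
// Inside the ellipse means d(P,F1) + d(P,F2) <= major axis length.
bool isStable( float footLX, float footLY, float footRX, float footRY,
               float comX, float comY, float ovalSize )
{
    float sum = dist2d( comX, comY, footLX, footLY )
              + dist2d( comX, comY, footRX, footRY );
    return sum <= ovalSize;
}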

(I know I am repeating, but this is still evolving for me)

So the critter knows 'this is an unstable pose, but I also know things are in motion and might get better before they get worse, so maybe I also track whether things are getting better or worse...'

But it doesn't necessarily go out of its way to fall down (perhaps it could), it just maintains a stress value (and direction?) the host can use to either steer the critter or just note it is in pain of some kind (drain hp?)

Probably a decision for a critterbrain VQ, but one decision might be to 'pick a foot which could be moved to change the offending oval', and then start making that change (self-animating).

While standing in a stable environment, maybe intentionally wander the center of mass within the ovals. Seeking the most comfortable position, but also staying 'alive' (but shades of classic early RPG characters bouncing back and forth while awaiting your attack)

Implement 'tics' (butt scratch, head scratch, arm scratch... all sorts of scratches). Maybe with target nodes, so the animation of the left hand can seek out the location of the right foot, and IK bend them until contact, while obeying all joint angle/stress constraints, but possibly wobbling while it does it.

Anyway, a 'tic' would be... an animation, with key frames targeting 'from this joint down' (but could it still pull all the way to root in the case of severe IK dissonance?)

To clarify: rather than solve for the perfect solution and then set the joints to that straight away, I want to only detect the 'difference between what I want and what I have', and maybe 'is that difference increasing or decreasing?', and then make a micro decision to make things a teensy bit better, and do that at 30 fps and hope it looks 'more interesting than a canned playback', along with feeling 'more connected to the environment'. Emergent behaviour, or the illusion thereof.
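
In other words, something shaped like this per frame (a sketch; here a 'pose' is just an array of joint angles, and the step limit is what makes it a series of small 'legal' changes):

code:
#include <math.h>

// per-frame nudge: instead of snapping to the target pose, move each joint
// angle a small legal step toward it and report how far off we still are
float stepTowardPose( float* current, const float* target, int numJoints,
                      float maxStepPerFrame )
{
    float totalError = 0.0f;
    for( int j = 0; j < numJoints; j++ ) {
        float diff = target[j] - current[j];
        totalError += fabsf( diff );

        // clamp the change to a small 'legal' step
        if( diff >  maxStepPerFrame ) diff =  maxStepPerFrame;
        if( diff < -maxStepPerFrame ) diff = -maxStepPerFrame;
        current[j] += diff;
    }
    return totalError;   // the host can watch this grow/shrink (better or worse?)
}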

I'm living the dream. One minute at a time.

[ 06-26-2019, 06:21 PM: Message edited by: samsyn ]


samsyn
And maybe this for a 'skin' :-)

[image: proposed 'skin']

[ 06-26-2019, 03:32 PM: Message edited by: samsyn ]


samsyn
Still struggling with a couple of bugs, but I made my little control panel for skeleton creation (separate from posing and animating).

[screenshot: skeleton creation control panel]


samsyn
This is how you know I'm a little crazy

[screenshot: red-blue 3D glasses mode on the critter skeletons]

Yes, I added red-blue 3D glasses support to synSpace critter skeletons. I mean, here they were, little stick figures. Like, the perfect thing for red/blue.

Sadly it is very very dim. Maybe a thicker line would look brighter. I need to render the colors 'additively', and the red is naturally much brighter than the blue, so I have to cut it way back, and the result is a pretty dark purple.

But hey, it works! And no flicker! (which is to say initially I had flicker, but just fixed that).

I think it is actually justifiable for the pose/animator system since it can be helpful to gauge that sort of thing without having to rotate the camera constantly.


samsyn
Here's a slightly nicer version with no flicker, and a much brighter skeleton, and the left/right set for a more normal 'red lens on left eye' view

[screenshot: brighter red/blue view, red lens on the left eye]

Ideally, I would start with black and then render red and blue 'additively', but to simulate that here I start with black, render solid cyan as bright as I can, then overlay that with alpha red that lets the blue through to create 'white' (gray) in the overlap. If the overlap looks bluish or reddish, then it needs to be adjusted.
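
The 'ideal' additive version, per pixel, would be shaped something like this (a sketch, not the actual render path; the red gain is the knob I'm describing above):

code:
// idealized additive red/cyan compositing, per pixel: the left eye's
// brightness goes only into the red channel, the right eye's only into
// green+blue (cyan). Where both eyes see the line you get gray, which is
// the balance being eyeballed above.
struct RGB { unsigned char r, g, b; };

RGB anaglyphPixel( unsigned char leftLuma, unsigned char rightLuma,
                   float redGain /* e.g. ~0.6 to tame the bright red */ )
{
    RGB out;
    float red = leftLuma * redGain;
    out.r = (unsigned char)( red > 255.0f ? 255 : red );
    out.g = rightLuma;
    out.b = rightLuma;
    return out;
}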


Hesacon
Can the skeleton dance?

   
