kion
Arukoitan
@kion_dgl
Posts: 193
Post by kion on Jul 15, 2018 10:53:16 GMT -5
Semi Progress?

Because of changes in the way I can export with .dmf versus other options, I wanted to make a copy of my MML2 PC mesh page, simplifying the code and allowing me to add a few fixes. There are still a few bugs in this new version compared to the older one, most notably bones not being in the correct order, as pictured below. But when models do work, they look pretty good. And I managed to rewrite the left bar so that images are not included with the model results.

Managed to get meshes mostly working, animations working, and textures mostly working in a short period of time. So it's mostly a matter of narrowing down what needs to be fixed.

1. Need to debug the bone order. Specifically Megaman's cut scene mesh, which is the first model listed in ST03.DAT. (Edit: fixed. Turned out that in the primitive list I put the continue statement before the increment for primitives that didn't need to be drawn, so the wrong bones were getting attached to the wrong polygons.)

2. The wrong textures are mapping on a few occasions. Need to try and figure out why. And then I also need to add in an option to override my model-to-texture file mapping, because the game likes to jump around with which file is referred to for textures. (Edit: Semi fixed. The issue was that I was matching textures outside of the 256x256 framebuffer area that the model was connected to. There are still issues when the model is linked to the wrong texture file, but it works pretty well when linked to the correct one. Since there's no easy way to map all of the textures, I still need to implement a manual setting for some of them.)

3. Need to look into animations. Rotations are working, but the position of the root bone is gone. Need to find animations that are easy to judge distance with, like Roll falling down and Roll kneeling on the ground, to try and find the metrics used for setting distance.
(Edit: Only the root bone has a position; from the root bone on, every bone only has the rotation component of the animation. For the root bone, the position values provided are relative to the root bone, so the position value is added, not replaced. There's also a magnitude component in the last two bits of the position vector. It goes in factors of 2, so 0 is 1x, 1 is 2x, 2 is 4x and 3 is 8x.)

4. Kind of debating how much time to spend on this, but the PL02P000.DAT files are likely models for the more important characters. Since I already exported the in-game models, I'm tempted to be lazy and put this off. But I'll have to poke and prod at these files to see if I can find anything interesting. (Edit: Not sure how much time I want to spend looking at other file formats for MML2 at the moment. Going to start focusing on Blender. I could try adding analysis in a later post.)

On the Blender side of things, I got the option to import a .dmf file displayed in the menu. Doesn't actually do anything yet, but it is something. Code looks like this for a basic template: pastebin.com/RAC05kQQ

Next step is to start looking into the information that gets passed into the function and the data structs provided in the bpy module.

Blender Addon Continued

I think I've managed to get a working outline for a Blender plugin. Not in terms of functionality, but at least the ability to get a menu item added to the import and export menus respectively, and an approach for writing the addons themselves. Currently the project has three files in one directory that look like the following.

__init__.py
export_dmf.py
import_dmf.py
The __init__.py is where we initialize the functionality we want, specifically getting the menu items added. Currently it looks something like this.

Can't say I'm very familiar with Python, but one issue I have with it is that, as far as I know, there are no function prototypes to declare a function and define it later. Everything has to be defined before it can be used, which makes reading Python a bottom-to-top kind of thing, where the most important things tend to gravitate towards the bottom and the things with fewer dependencies float towards the top.

In this case we have a register and unregister function declared at the bottom. We use bpy.types.INFO_MT_file_import and bpy.types.INFO_MT_file_export to add a menu option to the import and export menus respectively. We define a callback function for when these are selected, which here are menu_func_import and menu_func_export. One thing I definitely want to note is that I was trying super hard to find information in the Blender documentation on "menu_func_import" and couldn't find any. And that's because it's just a conventional name for a callback function and not anything reserved in the Blender namespace. I wonder if I could make this cleaner by defining a function in my import and export classes and skip having to call these functions entirely.

From there we define an Import and Export class, which to be honest I don't even understand how they work. We define a Blender id name for each class, so Blender must have a way of recognizing each class by its id name, making an instance, and then calling execute every time import or export is selected from the menu. I might have to add an __init__ function to see when an instance is initialized. From there, for the import and export class, we define an execute function which gets called to parse or export a model. And I've found that if execute is not defined, then Blender will crash.
So once execute is defined, we import the other file where our loader class lives (to keep the source code from getting too messy), create a new instance of that class, and then call parse.

class DashLoader:

    # Constructor
    def __init__(self):
        self.block = 0
        self.attrs = 1
        self.names = 2
        self.bones = 3

    def readMagic(self):
        bytes = self.view.read(4)
        print(bytes)
        print("is this working?")
        return str(bytes, "ASCII")

    def parse(self, filepath):
        self.view = open(filepath, 'rb')
        self.readMagic()
        self.readHeader()
        self.readNames()

    def readHeader(self):
        print("readHeader")

    def readNames(self):
        print("readNames")
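As a sketch of where readHeader could go next, here's how Python's struct module handles this kind of binary parsing. To be clear, the field layout below (a 4-byte magic followed by three little-endian uint32 counts) is an invented example for illustration, not the actual .dmf header:

```python
import struct
from io import BytesIO

def read_dmf_header(view):
    """Read a hypothetical .dmf header: 4-byte magic, then three
    little-endian uint32 counts. The layout is an assumption used
    only to demonstrate struct.unpack, not the real format."""
    magic = view.read(4).decode("ascii")
    names, bones, verts = struct.unpack("<III", view.read(12))
    return magic, names, bones, verts

# Usage with an in-memory stand-in for a file opened with open(path, 'rb')
buf = BytesIO(b"DMF\x00" + struct.pack("<III", 2, 3, 8))
print(read_dmf_header(buf))  # ('DMF\x00', 2, 3, 8)
```

The format string "<III" is the whole trick: "<" forces little-endian, and each "I" consumes four bytes as an unsigned int.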
Right now I only have a really basic class defined to print out values, to make sure the source code is loaded and working. Once we actually get to this point, we get passed a filename and can open it with Python's file reading functions, so the next step will be to read information from the file and get it into lists. And then from there we can look at Blender's data types and see what we need to do to get the data into the Blender editor.

Edit: Based on my ramblings here, I was able to simplify my init code a lot. It now looks like this:

# --------------------------------------------------------------------------- #
# Blender Addon : Dash Model Format (Import - Export)
# MIT License
# Copyright 2018 kion @ dashgl.com
# --------------------------------------------------------------------------- #
bl_info = {
    "name": "Dash Model Format",
    "author": "Kion",
    "blender": (2, 79, 0),
    "location": "File > Import-Export",
    "description": "Import-Export .dmf meshes",
    "warning": "",
    "wiki_url": "https://gitlab.com/kion-dgl/DashModelFormat",
    "category": "Import-Export",
}
import bpy
from bpy.types import Operator
from bpy_extras.io_utils import ExportHelper, ImportHelper
from bpy.props import StringProperty, BoolProperty, EnumProperty
# --------------------------------------------------------------------------- #
# Register Import and Export Menu Items
# --------------------------------------------------------------------------- #
def register():
    bpy.utils.register_module(__name__)
    bpy.types.INFO_MT_file_import.append(ImportDashModelFormat.menu_func)
    bpy.types.INFO_MT_file_export.append(ExportDashModelFormat.menu_func)

def unregister():
    bpy.utils.unregister_module(__name__)
    bpy.types.INFO_MT_file_import.remove(ImportDashModelFormat.menu_func)
    bpy.types.INFO_MT_file_export.remove(ExportDashModelFormat.menu_func)

if __name__ == "__main__":
    register()
# --------------------------------------------------------------------------- #
# Import Dash Model Format (.dmf)
# --------------------------------------------------------------------------- #
class ImportDashModelFormat(Operator, ImportHelper):
    bl_idname = "import.dmf"
    bl_label = "Import Dash Model Format"

    filename_ext = ".dmf"
    filter_glob = StringProperty(default="*.dmf", options={'HIDDEN'})

    @staticmethod
    def menu_func(self, context):
        self.layout.operator("import.dmf", text="Dash Model Format (.dmf)")

    def execute(self, context):
        from . import import_dmf
        loader = import_dmf.DashLoader()
        loader.parse(self.filepath)
        print("Hello, World!!!")
        return {'FINISHED'}

# --------------------------------------------------------------------------- #
# Export Dash Model Format (.dmf)
# --------------------------------------------------------------------------- #
class ExportDashModelFormat(bpy.types.Operator, ExportHelper):
    bl_idname = "export.dmf"
    bl_label = "Export Dash Model Format"

    filename_ext = ".dmf"

    @staticmethod
    def menu_func(self, context):
        default_path = bpy.data.filepath.replace(".blend", ".dmf")
        opts = self.layout.operator("export.dmf", text="Dash Model Format (.dmf)")
        opts.filepath = default_path

    def execute(self, context):
        from . import import_dmf
        exporter = import_dmf.DashExporter()
        exporter.parse(self.filepath)
        print("Goodbye, World!!!")
        return {'FINISHED'}

# --------------------------------------------------------------------------- #
# End Dash Model Format Import - Export Addon
# --------------------------------------------------------------------------- #
Next step is to start working on the functionality for the import class.

Edit: Okay, after working on the import class a little bit, I have a better idea of why there's not much documentation with respect to importing. Basically, in the __init__.py file you add a menu item and assign a handler for when that menu item is selected. Once a file is selected, you get a file path, and it's mostly just a matter of reading the file from there. Current source: pastebin.com/BuuG0pZF

Right now I'm still reading the file from the disk. With Python it turned out to be a lot easier than I expected. You open the file. You seek to an offset. You read a certain number of bytes. And then you use the struct module to define specifically what those bytes are. So it's pretty easy to work with, at least with respect to reading values in Python. So far I have attributes, names, bones and vertices read from the file, which means I still have textures, materials, face groups and animations. I'm going to go right for face groups, to see if I'm able to at least get a mesh displayed in the viewport, and then go back and add more from there. Define a material and see if that works. Try reading a texture and see if I can pair it with a material. And then from there try defining an armature and see if I can get that to work. And then finally look into animations. So once I finish reading face groups, the next step is to start looking into the data structures I need to define to create a mesh in Blender.

Edit: Okay, so the essential information in the model has been read from the file. The next step is to make a mesh and add it to the scene. So thanks to the code from the three.js import addon (and not thanks to any kind of documentation), we can find the source for making a mesh and adding it to the scene.

me = bpy.data.meshes.new(name)
me.from_pydata(vertices, edges, faces)
ob = bpy.data.objects.new(name, me)
ob.data = me # link the mesh data to the object
scene = bpy.context.scene # get the current scene
scene.objects.link(ob) # link the object into the scene
So it looks like first we need to create a new mesh, then pass it into a function that auto-generates the mesh from a list of vertices and faces. And then we have to create an object, give it a name, attach the mesh to the object, and then add it to the scene. Who needs documentation anyway?

Edit: Okay, and basic meshes are working. Current source code is posted here: pastebin.com/x0cMZ32W

Time to start implementing more details, like bones, textures and materials.

Edit: For adding in bones, it looks like I need to use bpy.data.armatures.new, and an example to reference here: github.com/ccxvii/asstools/blob/master/iqe_import_two.py

Because having documentation would make too much sense, right?

Edit: So I've complained about the Blender documentation a few times in jest. But I think I have a pretty good idea of specifically what I have a problem with. And my problem is that all of the documentation revolves around read-only values. So Blender has a scene. That scene is full of objects, and each object can have a type (light, camera, mesh, armature). And to some degree I can understand that Blender is first and foremost an editor, so a lot of the addons probably revolve around manipulating data that already exists within a Blender scene. What I have a problem with is why there is so little documentation around how to create, and specifically how to edit, any kind of data type. For example, bpy.data.armatures.new and bpy.data.meshes.new are constructors that I found from looking through the source of import addons, and references I have yet to find in the Blender documentation. What I find worse is that I can't find any information on how to edit anything beyond initialization. Again, most of the documentation I have been able to find seems to focus on read-only data. So after you've called bpy.data.meshes.new, how do you add any attributes to the result?
And that's not only the case for importing new meshes into the scene; even editing existing objects should be covered, if that is the primary focus of the documentation. How do I add UVs to a mesh? How do I add normals to a mesh? How do I add vertex colors to a mesh? So far the only hint I've managed to find is mesh.from_pydata(vertices, edges, faces), but again that only defines the bare minimum of what is needed to create a mesh and doesn't describe how to set any other attributes beyond that. So then the weird part is, you have to do this weird code scumming, where you look around at other people's code. Which is a really weird thing to expect people to do. Because there are a lot of 3d formats that are structured very differently, and not a lot of documentation for most of them. So you're expected to look at one part of someone else's code for handling some proprietary undocumented format, and extract enough context out of that source sample to apply it to your own specific problem. And that begs the question of "if there's no documentation, how did these people know to write this syntax in the first place?" So this is incredibly bizarre. I have never experienced documentation that is this flat-out useless. So far I have yet to find anything of any practical use in it, and everything that I have managed to implement is from code scumming other addons. Any and all Google searches for bits and pieces of code will generally return code snippets on GitHub, and I have yet to find any pages in the Blender documentation or otherwise that describe how you are supposed to define properties for any given data type. The one thing I can say conclusively, though, is that as opaque as Blender is, it's still better than working with Collada's .dae file type.

Edit: Okay, so far managed to get bones added, displayed and in the right hierarchy.
Right now I have the bones set to a prefixed value of just z 0 to z 1 to see if they would display at all. The next step is going to be figuring out how to set a bone from a 4x4 matrix. Figure out if Blender uses row-major or column-major matrices. And since Blender swaps the z and y axes, see if I have to swap the z and y transformation indices in my transformation matrix.

Edit: Okay, it looks like I probably got bones working. It looks like the bone head is the tail of the parent bone. And in the case of the root bone, the head is the origin, and the tail is the position part of the transformation matrix (so the top three elements of the fourth column of a 4x4 transformation matrix). For each child bone after that, the head is the tail of the parent bone, and the tail of that bone is the sum of the parent tail with the current bone's position information.

for joint in self.joints:
    id = joint[0]
    pid = joint[1]
    bone = armature.edit_bones.new("bone_%03d" % id)
    bone.head = Vector( (0,0,0) )

    if pid != -1:
        bone.parent = armature.edit_bones[pid]
        bone.head = bone.parent.tail

    tail = Vector( (joint[14], joint[16], joint[15]) )
    bone.tail = bone.head + tail
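Stripping away the Blender API, the head/tail convention above boils down to a simple accumulation. Here's a pure-Python sketch with plain tuples (the joint layout is simplified for illustration, not the actual self.joints layout):

```python
def compute_bone_ends(joints):
    """joints: list of (id, parent_id, (x, y, z)) local translations,
    parents listed before children. Returns {id: (head, tail)} following
    the convention above: a bone's head is its parent's tail, and its
    tail is head + local offset; the root's head is the origin."""
    ends = {}
    for jid, pid, offset in joints:
        head = (0.0, 0.0, 0.0) if pid == -1 else ends[pid][1]
        tail = tuple(h + o for h, o in zip(head, offset))
        ends[jid] = (head, tail)
    return ends

# Tiny two-bone example: a root and one child
skeleton = [
    (0, -1, (0.0, 13.4, 0.0)),
    (1, 0, (0.0, 2.55, 0.0)),
]
print(compute_bone_ends(skeleton))
```

Because each tail is the parent tail plus a local offset, deeper chains automatically accumulate all ancestor positions.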
So the next two changes I need to make are: 1) Use dicts in my Python code instead of individual arrays for everything, to group related data and make my import addon easier to manage. And 2) Look into using a bmesh for importing meshes, because I am not having any luck finding the syntax sugar to make the bpy.data.meshes.new approach work. You can get a basic mesh (with faces and vertices), but I can't find how to do completely normal things like add uv values. So once I switch over to dicts and bmesh, the next step will be to see if I can get vertex weights and vertex indices working from there, and then figure out how to link an armature to a mesh.

Edit: Okay, I got dictionaries working. That made my code simpler, and doing so gave me some hints on how I could further simplify my file format definition. Next I got bmesh working, but only to the same degree as before: faces and vertices. Doing anything beyond that is a shot in the dark.

Here's the example link for bmesh: docs.blender.org/api/current/bmesh.html?highlight=bmesh#module-bmesh
Here's the api definition for bmesh (describes the properties but no sample code): docs.blender.org/api/current/bmesh.types.html#bmesh.types.BMesh
Here's the blender repository and dev community (in case I need to start asking questions): developer.blender.org/diffusion/BAC/
And here's a searchable python code base (to use to look for examples from other addons): nullege.com/codes/search/bpy.data.armatures.new
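For reference, here is my best guess at how UVs get assigned through bmesh, pieced together from the BMesh types reference linked above. The loop/layer access pattern is an assumption I haven't verified against a real mesh, and the import guard is only there so the sketch can load outside Blender:

```python
# bmesh and bpy only exist inside Blender, so guard the import; outside
# Blender this file still loads and apply_uvs simply reports failure.
try:
    import bmesh
    HAVE_BLENDER = True
except ImportError:
    HAVE_BLENDER = False

def apply_uvs(mesh, uvs):
    """uvs: one list of (u, v) pairs per face, one pair per corner.
    Writes them into a new UV layer on the given bpy mesh via bmesh.
    This is a sketch of the assumed API, not tested code."""
    if not HAVE_BLENDER:
        return False
    bm = bmesh.new()
    bm.from_mesh(mesh)                        # copy mesh data into the bmesh
    uv_layer = bm.loops.layers.uv.new("UVMap")  # add a UV layer
    bm.faces.ensure_lookup_table()
    for face_i, face in enumerate(bm.faces):
        for corner_i, loop in enumerate(face.loops):
            loop[uv_layer].uv = uvs[face_i][corner_i]
    bm.to_mesh(mesh)                          # write the edits back
    bm.free()
    return True
```

The key idea is that per-corner data (UVs, vertex colors) lives on "loops", so you index a loop by a layer to read or write it.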
Post by kion on Jul 22, 2018 10:49:40 GMT -5
Not an intentional bump. My browser was starting to get really laggy editing the previous post, so I'm moving onto a fresh post.

Found a hint for implementing vertex weights in the mmd import addon.

def __importVertices(self):
    self.__importVertexGroup()
    pmxModel = self.__model
    mesh = self.__meshObj.data
    mesh.vertices.add(count=len(self.__model.vertices))
    for i, pv in enumerate(pmxModel.vertices):
        bv = mesh.vertices[i]
        bv.co = mathutils.Vector(pv.co) * self.TO_BLE_MATRIX * self.__scale
        bv.normal = pv.normal
        if isinstance(pv.weight.weights, pmx.BoneWeightSDEF):
            self.__vertexGroupTable[pv.weight.bones[0]].add(index=[i], weight=pv.weight.weights.weight, type='REPLACE')
            self.__vertexGroupTable[pv.weight.bones[1]].add(index=[i], weight=1.0-pv.weight.weights.weight, type='REPLACE')
        elif len(pv.weight.bones) == 1:
            self.__vertexGroupTable[pv.weight.bones[0]].add(index=[i], weight=1.0, type='REPLACE')
        elif len(pv.weight.bones) == 2:
            self.__vertexGroupTable[pv.weight.bones[0]].add(index=[i], weight=pv.weight.weights[0], type='REPLACE')
            self.__vertexGroupTable[pv.weight.bones[1]].add(index=[i], weight=1.0-pv.weight.weights[0], type='REPLACE')
        elif len(pv.weight.bones) == 4:
            self.__vertexGroupTable[pv.weight.bones[0]].add(index=[i], weight=pv.weight.weights[0], type='REPLACE')
            self.__vertexGroupTable[pv.weight.bones[1]].add(index=[i], weight=pv.weight.weights[1], type='REPLACE')
            self.__vertexGroupTable[pv.weight.bones[2]].add(index=[i], weight=pv.weight.weights[2], type='REPLACE')
            self.__vertexGroupTable[pv.weight.bones[3]].add(index=[i], weight=pv.weight.weights[3], type='REPLACE')
        else:
            raise Exception('unkown bone weight type.')

It looks like each bone has its own vertex group. And for each bone, you use 'add' to add an index and a weight to that specific group. And it looks like the declaration is here:

def __importVertexGroup(self):
    self.__vertexGroupTable = []
    for i in self.__model.bones:
        self.__vertexGroupTable.append(self.__meshObj.vertex_groups.new(name=i.name))
Okay, and this is added to the "mesh object". Which is kind of weird, because you declare a mesh, and then you declare an object (to be referenced in the scene) and pass the mesh into that. So the object being referenced here is the one being added into the scene. I think this code also shows how to link a mesh and an armature, which I was wondering about as well.

def __createObjects(self):
    """ Create main objects and link them to scene.
    """
    pmxModel = self.__model
    self.__root = bpy.data.objects.new(name=pmxModel.name, object_data=None)
    self.__targetScene.objects.link(self.__root)

    mesh = bpy.data.meshes.new(name=pmxModel.name)
    self.__meshObj = bpy.data.objects.new(name=pmxModel.name+'_mesh', object_data=mesh)

    arm = bpy.data.armatures.new(name=pmxModel.name)
    self.__armObj = bpy.data.objects.new(name=pmxModel.name+'_arm', object_data=arm)

    self.__meshObj.parent = self.__armObj
    self.__targetScene.objects.link(self.__meshObj)
    self.__targetScene.objects.link(self.__armObj)

    self.__armObj.parent = self.__root

    self.__allObjGroup.objects.link(self.__root)
    self.__allObjGroup.objects.link(self.__armObj)
    self.__allObjGroup.objects.link(self.__meshObj)
    self.__mainObjGroup.objects.link(self.__armObj)
    self.__mainObjGroup.objects.link(self.__meshObj)

Edit: From the mmd python addon, I was able to get more hints, get the mesh added into the armature as a child, and get a list of vertex groups linked with bone names. So when I open up weight paint mode, I can see that I do have groups of vertices associated with each bone. The problem is that the group name is just a name, and the groups are just a bunch of groups, so I don't think the vertices are actually weighted to a specific bone yet. More that I have a group of vertices ready to be associated with a bone later, so I need to find the syntax that actually tells Blender to link a vertex group with a specific bone for deformation.
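The usual answer here, as far as I can tell, is an Armature modifier on the mesh object: vertex groups whose names match bone names then drive deformation. This is the standard Blender pattern, though I haven't confirmed it against this addon yet, so treat it as an assumption (and the guard just lets the sketch load outside Blender):

```python
# bpy only exists inside Blender; outside Blender this sketch loads
# but bind_mesh_to_armature simply returns None.
try:
    import bpy
    HAVE_BLENDER = True
except ImportError:
    HAVE_BLENDER = False

def bind_mesh_to_armature(mesh_obj, arm_obj):
    """Parent the mesh object to the armature object and add an Armature
    modifier, so vertex groups named after bones actually deform the mesh.
    Assumed standard pattern, not verified against this importer."""
    if not HAVE_BLENDER:
        return None
    mesh_obj.parent = arm_obj
    mod = mesh_obj.modifiers.new(name="Armature", type='ARMATURE')
    mod.object = arm_obj
    mod.use_vertex_groups = True
    return mod
```

With this in place, a vertex group named "bone_000" should be pulled around by the bone of the same name when it rotates in pose mode.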
Edit: Thanks to some help from the Blender dev forums, vertex weights are now linked to the armature! Next step is to go back and try to figure out how to calculate the correct bone heads and tails so that the mesh actually rotates in the correct direction. I have a few options to approach this. The first is probably going to be to search around Stack Exchange and see if I can find a quick and dirty answer there. If that doesn't work, then I still have my Collada exporter, and I can export a mesh and then either find or write a small addon that will print out all of the bone properties to reverse engineer from. And lastly I have the Collada import source code, which is in C++ but might give some hints on how the bones are being calculated.

Edit: Okay, and Blender has a Python console. So getting the values from the Collada file was easier than expected:

for bone in bpy.data.armatures['Armature'].bones:
    print(bone.head)
    print(bone.tail)
    print("...")

#~ <Vector (0.0000, 13.4000, 0.0000)>
#~ <Vector (0.0000, 15.9500, 0.0000)>
#~ ...
#~ <Vector (0.0000, 7.3500, 0.0000)>
#~ <Vector (0.0000, 9.9000, 0.0000)>
#~ ...
#~ <Vector (-5.6500, 5.3500, 0.0000)>
#~ <Vector (-5.6500, 7.9000, 0.0000)>
#~ ...
#~ <Vector (5.6500, 5.3500, 0.0000)>
#~ <Vector (5.6500, 7.9000, 0.0000)>
#~ ...
#~ <Vector (-2.5500, -2.5500, 0.0000)>
#~ <Vector (-2.5500, 0.0000, 0.0000)>
#~ ...
#~ <Vector (2.5500, -2.5500, 0.0000)>
#~ <Vector (2.5500, 0.0000, 0.0000)>
#~ ...

And for comparison, the JavaScript side that walks the bone hierarchy and extracts positions from the world matrices:

for ( var i = 0, j = 0; i < bones.length; i ++ ) {
    var bone = bones[ i ];

    if ( bone.parent && bone.parent.isBone ) {

        boneMatrix.multiplyMatrices( matrixWorldInv, bone.matrixWorld );
        vector.setFromMatrixPosition( boneMatrix );
        position.setXYZ( j, vector.x, vector.y, vector.z );

        boneMatrix.multiplyMatrices( matrixWorldInv, bone.parent.matrixWorld );
        vector.setFromMatrixPosition( boneMatrix );
        position.setXYZ( j + 1, vector.x, vector.y, vector.z );

        j += 2;

    }

}

And here is the list of bones that goes into the Collada file (logged from ColladaExporter.js):

Bone000 [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 13.4, 0, 1]
Bone001 [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 9.9, 0, 1]
Bone002 [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, -5.65, 7.9, 0, 1]
Bone003 [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 5.65, 7.9, 0, 1]
Bone004 [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, -2.55, -0, 0, 1]
Bone005 [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 2.55, -0, 0, 1]
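One thing worth pinning down from that dump: three.js stores matrices as flat 16-element column-major arrays, so the translation sits in elements 12 through 14. A tiny helper makes that concrete:

```python
def translation_of(mat16):
    """Pull the translation out of a flat 16-float matrix as stored by
    three.js (column-major: translation lives in elements 12, 13, 14)."""
    return (mat16[12], mat16[13], mat16[14])

# Bone000 from the ColladaExporter dump above
bone000 = [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 13.4, 0, 1]
print(translation_of(bone000))  # (0, 13.4, 0)
```

That (0, 13.4, 0) lines up with the first head vector printed from the Blender console, which is the kind of correspondence I'm hunting for when reverse engineering the head/tail placement.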
Edit: And here is how the Collada importer source creates bones:

int ArmatureImporter::create_bone(SkinInfo *skin, COLLADAFW::Node *node, EditBone *parent,
                                  int totchild, float parent_mat[4][4], bArmature *arm)
{
    float mat[4][4];
    float joint_inv_bind_mat[4][4];
    int chain_length = 0;

    //Checking if bone is already made.
    std::vector<COLLADAFW::Node *>::iterator it;
    it = std::find(finished_joints.begin(), finished_joints.end(), node);
    if (it != finished_joints.end()) return chain_length;

    // JointData* jd = get_joint_data(node);

    // TODO rename from Node "name" attrs later
    EditBone *bone = ED_armature_edit_bone_add(arm, (char *)bc_get_joint_name(node));
    totbone++;

    if (skin && skin->get_joint_inv_bind_matrix(joint_inv_bind_mat, node)) {
        // get original world-space matrix
        invert_m4_m4(mat, joint_inv_bind_mat);

        // And make local to armature
        Object *ob_arm = skin->BKE_armature_from_object();
        if (ob_arm) {
            float invmat[4][4];
            invert_m4_m4(invmat, ob_arm->obmat);
            mul_m4_m4m4(mat, invmat, mat);
        }
    }
    // create a bone even if there's no joint data for it (i.e. it has no influence)
    else {
        float obmat[4][4];

        // bone-space
        get_node_mat(obmat, node, NULL, NULL);

        // get world-space
        if (parent) {
            mul_m4_m4m4(mat, parent_mat, obmat);
        }
        else {
            copy_m4_m4(mat, obmat);
        }
    }

    if (parent) bone->parent = parent;

    float loc[3], size[3], rot[3][3];
    float angle;
    float vec[3] = {0.0f, 0.5f, 0.0f};
    mat4_to_loc_rot_size(loc, rot, size, mat);
    //copy_m3_m4(bonemat,mat);
    mat3_to_vec_roll(rot, vec, &angle);

    bone->roll = angle;
    // set head
    copy_v3_v3(bone->head, mat[3]);

    // set tail, don't set it to head because 0-length bones are not allowed
    add_v3_v3v3(bone->tail, bone->head, vec);

    /* find smallest bone length in armature (used later for leaf bone length) */
    if (parent) {
        /* guess reasonable leaf bone length */
        float length = len_v3v3(parent->head, bone->head);
        if ((length < leaf_bone_length || totbone == 0) && length > MINIMUM_BONE_LENGTH) {
            leaf_bone_length = length;
        }
    }

    COLLADAFW::NodePointerArray& children = node->getChildNodes();

    BoneExtended &be = add_bone_extended(bone, node);
    be.set_leaf_bone(true);

    for (unsigned int i = 0; i < children.getCount(); i++) {
        int cl = create_bone(skin, children[i], bone, children.getCount(), mat, arm);
        if (cl > chain_length) chain_length = cl;
    }

    bone->length = len_v3v3(bone->head, bone->tail);
    joint_by_uid[node->getUniqueId()] = node;
    finished_joints.push_back(node);

    be.set_chain_length(chain_length + 1);

    return chain_length + 1;
}
Post by kion on Jul 24, 2018 9:23:25 GMT -5
Blender now has bones

So I guess it paid off that I at least have a partially implemented Collada exporter. Because what I ended up doing is exporting the Collada file, opening up the Python console in Blender, printing all of the positions for the bones, and basically reverse engineering them to find out the positions that Blender wanted.

I still have some more testing to do with bones. For the Servbot, all of the bones are children of the root bone, and the position of the root bone has to be added to each child bone. So in the case of multiple levels, I may potentially have to add up the positions of all of the parent bones. One thing I need to take a look at is whether there is an option on import that will allow me to keep y as up, because switching y and z is pretty dumb.

Edit: Okay, now that we have bones placed correctly on at least one model, we can move on to the whole point of making a Blender addon in the first place: supporting animations. So like everything else with Blender, the documentation has very little to offer in terms of practical information, so we need to look into the source of other plugins, ask questions like crazy, and write everything down so we can scum our way through this. Looking through the source code for another plugin I found the syntax:

armature_object.animation_data_create()
action = bpy.data.actions.new(name="anim_000")
armature_object.animation_data.action = action
So the armature object is the parent for animation. Makes sense, I guess, since the bones are what's actually moving, and the mesh has a constraint to be deformed by the bones. Then we are given bpy.data.actions. So an "action" is an "animation". Like everything else in Blender it makes sense, but it's still off from what I would consider normal definitions. And last we tell the armature to use our action. One thing that is left unanswered in this syntax is how to add multiple animations, but if we can even get the model to move at all (horrible convulsions or otherwise), I'll take that as a win to get started with.

So, places to start:

1. The documentation. I've read over it a few times already and it doesn't make too much sense. We define an action, and an action is a list of fcurves. And fcurves can be converted into key frames. What's an fcurve and how do we define it? Well, it wouldn't be the Blender documentation if it actually answered that question.

2. Look into the source of other files. Nullege gives a list of source files with bpy.data.actions.new here: nullege.com/codes/search/bpy.data.actions.new. Which seems to generally point to the mmd addon.

3. Look into the Blender repository. Specifically there is a project called "import_bvh", which contains some source code examples.

arm_ob.animation_data_create()
action = bpy.data.actions.new(name=bvh_name)
arm_ob.animation_data.action = action
So here it starts out with creating an action and setting that action on the armature object.

num_frame = 0
for bvh_node in bvh_nodes_list:
    bone_name = bvh_node.temp  # may not be the same name as the bvh_node, could have been shortened.
    pose_bone = pose_bones[bone_name]
    rest_bone = arm_data.bones[bone_name]
    bone_rest_matrix = rest_bone.matrix_local.to_3x3()

    bone_rest_matrix_inv = Matrix(bone_rest_matrix)
    bone_rest_matrix_inv.invert()

    bone_rest_matrix_inv.resize_4x4()
    bone_rest_matrix.resize_4x4()
    bvh_node.temp = (pose_bone, bone, bone_rest_matrix, bone_rest_matrix_inv)

    if 0 == num_frame:
        num_frame = len(bvh_node.anim_data)
Here it looks like the code is doing two things. One is that it is collecting a pose bone, bone, rest matrix and inverted rest matrix for each bone. The other is calculating the number of frames. In my case the animation length is in terms of seconds, so I can multiply that number by frames per second to get an integer frame count back.

skip_frame = 1
if num_frame > skip_frame:
    num_frame = num_frame - skip_frame
# Create a shared time axis for all animation curves.
time = [float(frame_start)] * num_frame
if use_fps_scale:
    dt = scene.render.fps * bvh_frame_time
    for frame_i in range(1, num_frame):
        time[frame_i] += float(frame_i) * dt
else:
    for frame_i in range(1, num_frame):
        time[frame_i] += float(frame_i)
Here it looks like the code is doing something with the time scale and skipping over the first few frames. Not sure if this is needed; I'll ignore it, and if everything goes totally wrong I can refer back to it later.

for i, bvh_node in enumerate(bvh_nodes_list):
    pose_bone, bone, bone_rest_matrix, bone_rest_matrix_inv = bvh_node.temp
This is starting to look a little more familiar. A little bit back we made a bunch of matrices for each bone, and now we loop over all of the bones.

    if bvh_node.has_loc:
        # Not sure if there is a way to query this or access it in the
        # PoseBone structure.
        data_path = 'pose.bones["%s"].location' % pose_bone.name
        location = [(0.0, 0.0, 0.0)] * num_frame
        for frame_i in range(num_frame):
            bvh_loc = bvh_node.anim_data[frame_i + skip_frame][:3]

            bone_translate_matrix = Matrix.Translation(
                Vector(bvh_loc) - bvh_node.rest_head_local)
            location[frame_i] = (bone_rest_matrix_inv *
                                 bone_translate_matrix).to_translation()

        # For each location x, y, z.
        for axis_i in range(3):
            curve = action.fcurves.new(data_path=data_path, index=axis_i)
            keyframe_points = curve.keyframe_points
            keyframe_points.add(num_frame)

            for frame_i in range(num_frame):
                keyframe_points[frame_i].co = \
                    (time[frame_i], location[frame_i][axis_i])
So if the bone has translation data, then for each key frame we need to add a value. The data path gives the name of the keyframes to be defined; these are the position key frames for bone 000. The data for each key frame should either be a vec3 or a matrix (with the vec3 as a position). Now it looks like we might need to do something in there to get the position relative to the rest position of the bone. And then once that's done for every keyframe, we define a curve and add the data into it.

    if bvh_node.has_rot:
        data_path = None
        rotate = None
        if 'QUATERNION' == rotate_mode:
            rotate = [(1.0, 0.0, 0.0, 0.0)] * num_frame
            data_path = ('pose.bones["%s"].rotation_quaternion' %
                         pose_bone.name)
        else:
            rotate = [(0.0, 0.0, 0.0)] * num_frame
            data_path = ('pose.bones["%s"].rotation_euler' %
                         pose_bone.name)

        prev_euler = Euler((0.0, 0.0, 0.0))
        for frame_i in range(num_frame):
            bvh_rot = bvh_node.anim_data[frame_i + skip_frame][3:]

            # apply rotation order and convert to XYZ
            # note that the rot_order_str is reversed.
            euler = Euler(bvh_rot, bvh_node.rot_order_str[::-1])
            bone_rotation_matrix = euler.to_matrix().to_4x4()
            bone_rotation_matrix = (bone_rest_matrix_inv *
                                    bone_rotation_matrix *
                                    bone_rest_matrix)

            if 4 == len(rotate[frame_i]):
                rotate[frame_i] = bone_rotation_matrix.to_quaternion()
            else:
                rotate[frame_i] = bone_rotation_matrix.to_euler(
                    pose_bone.rotation_mode, prev_euler)
                prev_euler = rotate[frame_i]

        # For each Euler angle x, y, z (or Quaternion w, x, y, z).
        for axis_i in range(len(rotate[0])):
            curve = action.fcurves.new(data_path=data_path, index=axis_i)
            keyframe_points = curve.keyframe_points
            curve.keyframe_points.add(num_frame)

            for frame_i in range(0, num_frame):
                keyframe_points[frame_i].co = \
                    (time[frame_i], rotate[frame_i][axis_i])
Likewise the same is done for rotation. The reason this is long is because it handles both Eulers and quaternions. Since everything is already defined as a quaternion, we can just look into using that data. And more specifically, since everything outside of the position of the root bone is defined with rotation data, this is probably where we're going to be defining most of the motion. So looking over this source has given us some hints on where to get started. After making an action, we define some fcurves, and it doesn't matter if the animation is correct or not; if we can throw values in there and get something to move, then we can start trying to narrow down how things are actually supposed to work once we're able to observe something moving.

Edit: I'm pretty glad I was able to get the key frames to react in some way. And I at least got position working, which I wonder if I can cheat on. For MML, only the root bone needs position animation and the rest of the bones are done with rotation. So first I need to test whether I can skip over all of the bones in favor of only animating the root bone. And then I need to look into rotation to see if I can get anything done related to that.

Edit: Okay, we now have a running animation for the Servbot!!!! To be lazy I went ahead and only added in the Y-direction position key frames for the root bone, which saves me the trouble of having to figure out what Blender actually wants me to do for the relative bone position. And since the animations in MML only have position key frames for the root bone, this is a pretty easy corner to cut. As for the rotational key frames, it turned out to be pretty lucky that all I really needed to do was change the order of the rotation quaternion and then feed it into the fcurve data type that Blender provides. So the next thing to figure out is how to add a material, and then how to add textures and UVs to get these models looking like something.
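The quaternion reorder mentioned above can be sketched in plain Python. The storage orders are assumptions on my part: many formats store quaternions as (x, y, z, w) while Blender's rotation_quaternion channel is ordered (w, x, y, z).

```python
def to_blender_quat(q):
    """Reorder an (x, y, z, w) quaternion into Blender's (w, x, y, z)."""
    x, y, z, w = q
    return (w, x, y, z)

def keyframe_points_for_axis(times, quats, axis_i):
    """Build the (time, value) pairs for one fcurve channel (w=0, x=1, y=2, z=3)."""
    return [(t, to_blender_quat(q)[axis_i]) for t, q in zip(times, quats)]

# Inside Blender these pairs would then be assigned to
# action.fcurves.new(data_path='pose.bones["..."].rotation_quaternion', index=axis_i)
# via curve.keyframe_points[i].co, as in the import_bvh source quoted above.
```

Keeping the reorder as a tiny pure function makes it easy to test outside Blender before wiring it into the fcurves.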
Good news is that this functionality exists in the glTF Blender importer, so I think that would be a good place to start.

Edit: Okay, we have a material (set to red to make sure it works). Next I need to figure out how to load a PNG file as a texture from binary data, and then I need to figure out how to set face UV values.

Edit: Managed to load textures and attach them to a material. Though rather than being able to load textures directly from the binary included in the file, I ended up having to write a temporary image file and then read from that image file, which is kind of redundant and lame, but at least it works. Next step is to apply UVs to the faces of the model so the texture is actually displayed on the model. The three.js plugin has pretty readable code for that, so I'll post it here as a reference.

if face_data["hasVertexUVs"]:
    print("setting vertex uvs")
    for li, layer in enumerate(vertexUVs):
        me.uv_textures.new("uv_layer_%d" % li)
        for fi in range(len(faces)):
            if layer[fi]:
                uv_face = me.uv_textures[li].data[fi]
                face_uvs = uv_face.uv1, uv_face.uv2, uv_face.uv3, uv_face.uv4
                for vi in range(len(layer[fi])):
                    u = layer[fi][vi][0]
                    v = layer[fi][vi][1]
                    face_uvs[vi].x = u
                    face_uvs[vi].y = 1.0 - v
                active_texture = materials[faceMaterials[fi]].active_texture
                if active_texture:
                    uv_face.use_image = True
                    uv_face.image = active_texture.image
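The temporary-file workaround described above can be sketched like this. Everything in the function is plain Python; the Blender side of the round trip is only noted in comments, since bpy.data.images.load wants a path on disk rather than raw bytes.

```python
import os
import tempfile

def write_texture_to_temp(image_bytes, suffix=".png"):
    """Write embedded binary image data out to a temp file so Blender can load it."""
    fd, path = tempfile.mkstemp(suffix=suffix)
    with os.fdopen(fd, "wb") as f:
        f.write(image_bytes)
    return path

# Inside Blender the round trip would continue with something like:
#   image = bpy.data.images.load(path)
#   texture = bpy.data.textures.new("tex", type='IMAGE')
#   texture.image = image
```

Redundant, as the post says, but it sidesteps having to feed pixel buffers to Blender directly.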
Post by kion on Aug 3, 2018 8:02:18 GMT -5
Animations working in Blender

This ended up being longer (and louder) than expected. It's a 12 minute video going over how to export models from the web page and import them into Blender with animations. The web page is format.dashgl.com/legends.html, which is the same as my MML2 page with a few changes:

1. The y-position on animations has been fixed.
2. Previews for images have been removed.
3. Textures are applied to the mesh individually, and not grouped.
4. Exports are set to .dmf files only.

.dmf (Dash Model Format) is a file format that I ended up making. Basically any given 3d file is just a list of vertices, faces, bones, and skin weights that describe to a program how these elements should be arranged. The advantage of using a pre-existing format is that several programs already know how to interpret those files. But if the file format is hard to work with, and there's no guarantee that programs are going to interpret the file correctly, then I might as well make my own format, if I can write plugins to tell programs how to interpret that data correctly. The file specification for the "Dash Model Format" is listed over at: format.dashgl.com/. The file format itself, the documentation, the three.js importer/exporter, and the Blender plugins are all available under the MIT license here: gitlab.com/kion-dgl/DashModelFormat. Which is still at version 1. After making it and getting some experience with Blender I think there are still changes and improvements to be made, but as a proof of concept it works pretty well. So with Blender at least theoretically working, it means that you can export to .dmf, import into Blender, export .dae from Blender, and change the up direction of the .dae file from Z to Y. And then import that file into an editor and have animations. Though effectively what this all means is that I can use this process to produce example .dae files.
Which means I could use that as a basis for exporting directly to .dae from my file viewer, but in general .dae is too tedious of a file format to be working with. And it's more enjoyable to be working with the plugins for various 3d applications to make sure the data is being interpreted correctly.

As for why I don't use Blender or Noesis to interpret models directly to be exported: it's because using Javascript provides a lot of freedom. With these games there are a lot of references for which textures are mapped to which meshes, and which animations. Not to mention that more than one mesh can be included in any given file. Three.js and IndexedDB make it really easy to write functions to map these relationships, to get a model grouped into a mesh with textures and animations applied, and then export that as a single file where everything is included. That way there's only one specific model file that Blender (or whichever program) needs to read, and I can export to that.

So for files I have a decent proof of concept working. Which means the next step is to try and figure out the 0x40 flag which I've been ignoring up until now, and in Blender it's super noticeable. What the 0x40 flag is: in the hierarchy there is a list of polygons, with the bone and bone parent id for the influence of each polygon. On the end is a flag; 0x80 seems to be the flag for hiding a polygon, and 0x40 seems to be an indication of vertices overlapping. Which I'm not entirely sure what that means. Possibly it could mean use the child bone to transform the position of the vertices, but then use the weight of the parent bone to deform them. Or it could just mean that the weight is split between two sets of bones. There's also the issue of which set of vertices in a given group is even subject to these rules or not. I don't think I need to do any testing in game where I remove these flags, since I can probably already observe this behavior in the previewer.
Which means the next step is testing different approaches in the editor, and looking for examples with a low polygon count that share this flag definition.
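The two flag values above (0x80 to hide a polygon, 0x40 for shared vertices) come straight from these notes; the way I package them into a dict here is just for illustration.

```python
HIDE_FLAG = 0x80   # polygon is not drawn by default (swappable hands, held items)
SHARE_FLAG = 0x40  # polygon shares vertices with its parent polygon

def describe_flags(flags):
    """Report which of the two known hierarchy flags are set on an entry."""
    return {
        "hidden": bool(flags & HIDE_FLAG),
        "shared": bool(flags & SHARE_FLAG),
    }
```

Having this as a helper makes it easy to filter a model's hierarchy list for only the shared-vertex polygons, as done with the Miss Tron model later in the thread.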
Post by kion on Aug 3, 2018 13:31:28 GMT -5
Shared Vertex Flag

The next aspect to clean up with respect to Megaman Legends 2 seems to be the shared vertex flag. In MML2, models are made up of lists of polygons. And to describe which polygons belong to which bones, the model has a hierarchy list which has the format of:

[Polygon Number] [Parent Bone Number] [Child Bone Number] [Polygon Flags]

In this case we're mostly interested in the flags. So far the only flag I know of is 0x80, which seems to be the "don't draw" flag. For items that are in the character's hands like wrenches and such, it looks like they are pre-included in the model and attached to the hand as a child, with the command to have them not drawn by default. And if the flag switches, the wrench is then drawn in their hand. I think this is also done for different hands and gestures: a hand can be swapped out by telling one to draw and the other one not to, and then switching the flags to create the effect.

There's another flag that I've been ignoring up until now, which is the 0x40 flag. For right now at least, I'm referring to it as the share flag. It's easier to show than to describe, so one example for it is Miss Tron on the Sulfur Bottom. We can see that the vertices on her elbows don't match up, creating space in between them. So if we turn off all of the polygons except for the ones with share flags, we can get a better look. The model is in the file ST0202.DAT, and the model flag is 0x2820. There are only four polygons with the share flag attached, which makes it easier to approach than most of the other candidates. Also worth noting: this flag isn't used on reaverbots. It mostly seems to be used on characters, and specifically for joints such as elbows and knees. Another thing to note is the pattern. The elbow coming up from the bottom makes a triangle, and the parent bone also makes a triangle in the other direction. So it looks like these two polygons are intended to fit together in some specific way.
The question is in what specific way, and how do I know what to look for. Some possibilities: the position of the polygon before being transformed is the same, and that's what to look for; or the position of the polygon after being transformed by the bone is the same, and that's what to look for. Otherwise, if the parent and child both have the flag, then it could also be that the first four vertices on the child bone are to be attached to the last four vertices of the parent bone.

The next thing is specifically how to deal with this. Is it that the vertices simply attach to create the new mesh? So the vertices of the previous polygon are included in the draw order of the next polygon to create one continuous mesh, or is it that there are two polygons and the vertices are weighted a certain amount to the parent bone? The first of these two options seems like the more likely one. Though then the issue is how to look up the indices of the parent bone to create the faces for one continuous mesh. So it seems like the place to start is to collect data. I can export the binary of the Miss Tron model and start looking at it in Excel. And I can make a list of vertices for the top of the arm, the bottom of the arm, and the face order of these two polygons.

Edit: Okay, so we're dealing with a pair of floating arms. Time to start breaking down the information written in the model to see what we can piece out about how these are supposed to fit together. We have four primitives numbered 2, 3, then 5, 6. So we probably have the right and left arm. Which means we can break this down and just look at 2 and 3, which gives us just the right arm. Primitive id 2 has 16 verts and 13 quads, and primitive id 3 has 10 verts and six quads. What it looks like is the child primitive needs to use the vertex indices of the parent bone to make a continuous mesh (rather than being independent rectangles). That could mean a lot of different things:
a) the values of the vertices before being transformed by the bones are the same
b) the values of the vertices after being transformed by the bones are the same
c) the face indices need to be interpreted in a way that references the parent bone's vertices
d) there is some other information in the mesh file that needs to be parsed out

Since this list goes from simple to more complex, I guess I can just start with the first option and narrow down from there by elimination.

Edit:

{id: 2, nb_tri: 0, nb_quad: 13, nb_vert: 16, scale: 1, …}
Vertex 946 14 30 0
Vertex 946 14 986 0
Vertex 938 1020 31 0
Vertex 938 1021 985 0
Vertex 1017 46 30 0
Vertex 1014 46 986 0
Vertex 8 983 34 0
Vertex 4 983 982 0
Vertex 953 999 984 0
Vertex 953 999 33 0
Vertex 23 5 34 0
Vertex 19 5 982 0
Vertex 958 221 28 0
Vertex 958 155 987 0
Vertex 1007 221 28 0
Vertex 1007 155 987 0
{id: 3, nb_tri: 2, nb_quad: 6, nb_vert: 10, scale: 1, …}
Vertex 29 184 993 0
Vertex 29 184 37 0
Vertex 1002 2 33 0
Vertex 1002 960 992 0
Vertex 26 2 33 0
Vertex 26 960 992 0
Vertex 993 184 37 0
Vertex 993 184 993 0
Vertex 1002 19 33 0
Vertex 26 19 33 0

Option a (the vertices are the same before being transformed) doesn't seem to be the right approach, time to move on.

Edit:

{id: 2, nb_tri: 0, nb_quad: 13, nb_vert: 16, scale: 1, …}
Vertex -3.90 -0.70 1.50
Vertex -3.90 -0.70 -1.90
Vertex -4.30 0.20 1.55
Vertex -4.30 0.15 -1.95
Vertex -0.35 -2.30 1.50
Vertex -0.50 -2.30 -1.90
Vertex 0.40 2.05 1.70
Vertex 0.20 2.05 -2.10
Vertex -3.55 1.25 -2.00
Vertex -3.55 1.25 1.65
Vertex 1.15 -0.25 1.70
Vertex 0.95 -0.25 -2.10
Vertex -3.30 -11.05 1.40
Vertex -3.30 -7.75 -1.85
Vertex -0.85 -11.05 1.40
Vertex -0.85 -7.75 -1.85
{id: 3, nb_tri: 2, nb_quad: 6, nb_vert: 10, scale: 1, …}
Vertex 1.45 -9.20 -1.55
Vertex 1.45 -9.20 1.85
Vertex -1.10 -0.10 1.65
Vertex -1.10 3.20 -1.60
Vertex 1.30 -0.10 1.65
Vertex 1.30 3.20 -1.60
Vertex -1.55 -9.20 1.85
Vertex -1.55 -9.20 -1.55
Vertex -1.10 -0.95 1.65
Vertex 1.30 -0.95 1.65

Option b (the vertices are the same after being transformed by the bones) doesn't seem to be the case either. On to the face definition. Since we're looking at the intersection between bones, I think that means the child bone needs to have a share flag, and so does the parent bone. In this case we have primitive id 2, where the parent doesn't have this flag, so there's no connection at the elbow. But the child has the flag, so that means we need to look in between these models. The next step is to take a look at the face definition of primitive 3.

Edit: Progress maybe? Though I'm actually extremely confused by this result. I used points in three.js to display all of the vertices in the mesh to look for overlaps. And it looks like there are indeed places where the vertices overlap from one bone to the next, specifically on arms and legs. So I wrote my parser to check for the share flag (0x40), and when the share flag exists, to go back over the previously declared polygon and look for places where the vertices are close. I think it's a problem with float precision, or the way I'm scaling down the meshes, but previous vertices don't match up exactly with the newly declared ones. But if you check for plus or minus 0.5 within the range of a newly declared vertex, you can find the matching vertex on the parent. The problem is that this doesn't deform the mesh correctly when an animation is played. Specifically, for Roll during the running animation, her legs get super skinny because the top of the knee is bound to the parent bone, which doesn't preserve the rectangular shape of the leg when moved.
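Incidentally, the raw and scaled dumps above line up if each component is read as a 10-bit two's-complement value, scaled by 1/20, with y negated (the y-flip and 1/20 scale are mentioned later in the thread for the Roll model). This decoding is my own inference from comparing the two dumps, not something stated in the file format notes.

```python
def decode_component(raw, bits=10):
    """Interpret a raw field as two's-complement, e.g. 946 -> -78."""
    half = 1 << (bits - 1)
    return raw - (1 << bits) if raw >= half else raw

def decode_vertex(raw_x, raw_y, raw_z, scale=1 / 20):
    """Decode one raw vertex into the scaled floats seen in the second dump."""
    x = decode_component(raw_x) * scale
    y = -decode_component(raw_y) * scale  # y axis is flipped
    z = decode_component(raw_z) * scale
    return (x, y, z)

# First vertex of primitive 2: raw (946, 14, 30) -> (-3.90, -0.70, 1.50)
```

Each raw vertex in the first dump decodes to the matching line in the scaled dump, e.g. (938, 1020, 31) comes out as (-4.30, 0.20, 1.55).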
This could indicate that this is some kind of vertex weight problem and not a face assignment problem. The best way to test this would be to go to the field outside the launchpad after the game and record some footage of Roll running around. Then save a save state, find the area of memory where Roll's hierarchy is defined, remove any 0x40 flags, record some more footage, and see how Roll's mesh deformation changes. That would give me some hints as to specifically what is being changed with these flags, and hints on how to replicate the effect. But before I do that, I can try a blind vertex weight assignment test to see how that works out.
Post by kion on Aug 22, 2018 12:27:07 GMT -5
Shared Vertices

Originally I was going to look at exporting maps from MML1, before getting caught up with MML2: making a file format, making an importer for Blender, and spending a lot of time on small details. In terms of time management I want to wrap up MML2 and get back to the assets I'm missing in MML1. So I'm going to write out everything I know about the shared vertices flag in MML2, and then get back into MML1.

The picture above shows the issue. Most reaverbots and simple enemies use simple polygons to represent the model. But for a lot of human characters, the gaps between polygons leave places where nothing is drawn because of culling. However, if you look in game, you don't actually see this effect. So why does this problem not happen in game? The simple answer is because there is a flag on the polygons to "share" vertices between two polygons to avoid the gap. The problem is how to replicate this effect out of game. There's probably a really simple solution for this; the difficult part is going through the trial and error to find the simple solution. So the only thing I can think of doing is to write down everything I know about the Roll model to try and find some hints for how to approach this issue, or at the very least write everything down to have some background information for when I next encounter it. The hierarchy definition was never something I was able to figure out when I was working on MML1, but when I rewrote my tools for MML2, the concept seemed a lot easier because I had encountered it before. So hopefully this is the same kind of thing. Though Satoh has been hinting that he found an approach in his Noesis plugin, so if he's willing to share the code on that, I can try to see if I can glean anything from his solution, barring any progress via the documentation approach, before moving back into MML1. So let's start the information dump for Roll's model.
For the purposes of disambiguation I'm going to define some terms that are often thrown around (interchangeably) when talking about 3d models. In order to avoid confusion I try to use specific terms in a specific way, which I'm not sure is standard practice.

A "point" is an x, y, z location in 3d space. A "line" is a segment that goes between two 'points' (often called an edge). A "triangle" is three connected points, which results in a "face". A "quad" is two connected 'triangles' defined at the same time; quads are awkward and I'd rather nobody ever used them. A "strip" is a consecutive list of defined triangles with a common material. A "polygon" is a strip or series of strips that make up a specific part of a model ('arm', 'leg', etc). A "mesh" describes the total connected geometry of a 'model'. And a "model" is a general term for the combined result of the mesh with bones and animations (if present).

So basically I'm not sure if my definition of 'polygon' matches up with general practice, or if there's a better term for it, because people often use the terms 'strip', 'polygon' and 'mesh' interchangeably. Specifically with Megaman Legends, each part of the mesh is split into separate parts: body, head, upper arm, lower arm, hand, etc. Each one of these I basically refer to as a 'polygon'.

Bone 0 : { x: 0, y: 43.25, z: -0.55 }
Bone 1 : { x: 0, y: 13.8, z: 0.85 }
Bone 2 : { x: -5, y: 8.2, z: 0.5 }
Bone 3 : { x: -0.55, y: -8, z: 0.4 }
Bone 4 : { x: 5, y: 8.2, z: 0.5 }
Bone 5 : { x: 0.55, y: -8, z: 0.4 }
Bone 6 : { x: 0, y: -0, z: 0 }
Bone 7 : { x: -2.65, y: -3.35, z: 0.4 }
Bone 8 : { x: 0.05, y: -15.6, z: 0 }
Bone 9 : { x: 2.65, y: -3.35, z: 0.4 }
Bone 10 : { x: -0.05, y: -15.6, z: 0 }

So for completeness, above is the list of bones for Roll. There are 11, numbered 0 through 10. The y axis is flipped, and because of the PSX's use of shorts rather than floats the numbers are stupidly big, so I scale everything to 1/20th its original size for use with tools like Blender. The next thing we need to look at is the "hierarchy", which defines the bone and bone parent for each one of the polygons, and defines flags. The format is polygon number, parent bone, child bone, flags. In this case it's specifically the flags we're interested in, but again for completeness we'll write everything out.

Polygon No: 0 Parent Bone: 11 Child Bone: 0 Polygon Flags: 0x0
Polygon No: 1 Parent Bone: 0 Child Bone: 1 Polygon Flags: 0x0
Polygon No: 2 Parent Bone: 0 Child Bone: 2 Polygon Flags: 0x0
Polygon No: 3 Parent Bone: 2 Child Bone: 3 Polygon Flags: 0x0
Polygon No: 4 Parent Bone: 0 Child Bone: 4 Polygon Flags: 0x0
Polygon No: 5 Parent Bone: 4 Child Bone: 5 Polygon Flags: 0x0
Polygon No: 6 Parent Bone: 0 Child Bone: 6 Polygon Flags: 0x40
Polygon No: 7 Parent Bone: 6 Child Bone: 7 Polygon Flags: 0x40
Polygon No: 8 Parent Bone: 7 Child Bone: 8 Polygon Flags: 0x40
Polygon No: 9 Parent Bone: 6 Child Bone: 9 Polygon Flags: 0x40
Polygon No: 10 Parent Bone: 9 Child Bone: 10 Polygon Flags: 0x40
Polygon No: 11 Parent Bone: 2 Child Bone: 3 Polygon Flags: 0x80
Polygon No: 12 Parent Bone: 2 Child Bone: 3 Polygon Flags: 0x80

So we have polygons numbered 0 to 12. Generally the way it works out is that starting with polygon 0, up through the number of bones, the polygon number matches the child bone number 1:1 and defines the basic model. After that is generally extra stuff like wrenches, hands to be swapped out, or an item the character is holding. And if they have a 0x80 flag, that means the polygon isn't supposed to be drawn by default. So in this case polygon 11 is probably a wrench and polygon 12 is a piece of paper that Roll holds in her hand, and neither is really important. (Polygon 11 turned out to be a screwdriver, but that just shows they're not really important.)

Which means we need to track down what each of the other polygons is. So here we have a numbered breakdown of each of the polygons. It looks like the share flag is also active on Roll's shorts, but the effect isn't very easy to observe, so we'll focus on her knees, because they're a lot easier to work with. The next question becomes approach: how do we replicate the effect from the game given the information we have to work with? Which I would say is easier said than done; I think the solution is simple, the hard part is finding the simple solution.

So this is a point outline of Roll's leg. Honestly it's the same vertices, so I have no idea why it's mirrored, but you can see in the knee section that the dots overlap in a specific area. My first impression was that I just needed to map the faces of the child polygon to the parent. But that doesn't work, as the model doesn't deform correctly. Same with linking parent to child. Same with linking parent to child and then splitting the vertex weight. So there likely is a simple solution, but the obvious solutions don't seem to work. And I'm really going to hate myself if it's something so stupidly obvious that I didn't realize it.
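Reading polygons 0 through 10 as the 1:1 bone hierarchy, the parent of each bone can be pulled straight out of the list above. As a sketch, and assuming each bone's offset in the bone list is local to its parent (my assumption, not something the notes state), rest-pose positions fall out by walking the parent chain:

```python
# (child bone -> parent bone) taken from polygons 0-10 above; parent 11 marks the root.
PARENTS = {0: None, 1: 0, 2: 0, 3: 2, 4: 0, 5: 4, 6: 0, 7: 6, 8: 7, 9: 6, 10: 9}

# Local offsets from the bone list above (unscaled game units).
OFFSETS = {
    0: (0, 43.25, -0.55), 1: (0, 13.8, 0.85), 2: (-5, 8.2, 0.5),
    3: (-0.55, -8, 0.4), 4: (5, 8.2, 0.5), 5: (0.55, -8, 0.4),
    6: (0, 0, 0), 7: (-2.65, -3.35, 0.4), 8: (0.05, -15.6, 0),
    9: (2.65, -3.35, 0.4), 10: (-0.05, -15.6, 0),
}

def world_position(bone):
    """Accumulate local offsets up the parent chain to get a rest-pose position."""
    x, y, z = OFFSETS[bone]
    parent = PARENTS[bone]
    if parent is None:
        return (x, y, z)
    px, py, pz = world_position(parent)
    return (px + x, py + y, pz + z)
```

For example, bone 3 (the end of the left arm chain 0 -> 2 -> 3) comes out around (-5.55, 43.45, 0.35), which is plausibly shoulder-to-elbow geometry.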
The issue is that we need to get these meshes to share specific vertices, but at the same time we need these polygons to remain rigid with respect to their bone deformations so they don't stretch and flatten during animations.

Edit: The vertices of a previously defined polygon are close but generally not exact, so we can add a little bit of wiggle room to find where the vertices overlap.

for (let k = this.previous_ofs; k < this.geometry.vertices.length; k++) {
let prev = this.geometry.vertices[k];
if (prev.x > vertex.x + 0.5 || prev.x < vertex.x - 0.5) { continue; }
if (prev.y > vertex.y + 0.5 || prev.y < vertex.y - 0.5) { continue; }
if (prev.z > vertex.z + 0.5 || prev.z < vertex.z - 0.5) { continue; }
    ...
}

So the problem is that the way the bones are set up, they need to remain square with respect to their local coordinates to look correct. If you just map the parent polygons to the child vertices, or likewise the child polygon to the parent vertices, you end up with this for the actual animation. Now my thinking was that, to get around this issue, it might be a good idea to split the weight of the shared vertices between the two bones, and maybe that would solve it. It makes the effect less obvious, but it still definitely doesn't look right at all when deformed by animations. And then even trying other dirty tricks, like joining the leg at the knees while allowing the other vertices to remain unchanged to keep the form of the leg, causes the back part of the leg to stick through the knee during animations. And even if you try things like setting the vertex weights of the parent shared vertices to 0.8, 0.2 and the same with the child bones, you still end up with a lot of problems like gaps and piercing.

So after trying several combinations, I opened the game to see if there was anything that stood out about the bones that would indicate which approach the dev team took; there really isn't anything that stands out. You can take a save state and edit the hierarchy to remove all of the share flags, and the good news is you get the exact result I have in the editor. But then, when I put the flags back to try to see what's going on, it just doesn't make any sense, since the polygons fit together, and they fit together in a way that doesn't break like crazy. So I really am back at a point where I have no idea what's going on.

Edit: I think I'm going to leave MML2 as is for now and go back to the assets that I've been missing in MML1, and hopefully clean up my code with what I've learned from working with MML2, specifically with respect to the hierarchy format.
As for the flags, Satoh was working on a Noesis Addon for MML2 PC models, so I'll leave his code below if anyone wants to read through for his notes on these flags: pastebin.com/u82cnYn5
Post by kion on Jul 16, 2020 2:41:40 GMT -5
Two years later and Roll's knees have finally been fixed!!! The issue ended up being pretty simple. In retrospect it was pretty simple, but then again in retrospect it always is.

The image above shows the difference between MML1 and MML2. In MML1 body parts were made of independent polygons, with each polygon weighted to a single bone. This creates a blocky looking appearance, and you get limbs (such as knees and elbows) passing through each other when the character bends. In MML1 you can see that a lot of the character designs have flat colors for shirts or parts to mask this. In MML1 (and this is common in other early 90's games) you have a list of vertices, and then faces are declared relative to that list of vertices. For example, a flat plane with four vertices would have face indices numbered 0 to 3 and only reference that local vertex list. For the next polygon it would be the same thing: declare a list of vertices and have a list of faces with indices that only reference that one list.

In the early 2000's I guess designers and programmers managed to stop making these blocky models by blurring the lines between weighted polygons and declaring a global list of vertices. So in the previous example, if you have two planes with 8 vertices between them, face indices are labeled 0 to 7, can access any vertex that has been declared, and you can have one continuous mesh. In the case of MML2 I'm guessing it's at an in-between stage, as they share vertices between different polygons, but the implementation is really clumsy. The faces are all declared relative to the local list. So like in the first example, each polygon declares its own list of vertices, each polygon has a list of faces that only reference that short list of vertices, and all you have is a 0x40 flag that hints that you should be doing something. The implementation I made is a lookup from each polygon's local indices into a global list of vertices.
So when we have 4 vertices declared, I add them to a list where the indices are referenced globally, and then use the global reference to make the face. When a 0x40 flag is declared, I go back through all of the existing vertices, and if a vertex already exists that is super close to the new vertex being defined, I don't add the new vertex to the stack and instead set the index reference to the position of the previous vertex. And that way we get one continuous mesh instead of broken knees and a blocky mess.

Edit: I'm going to stash this here (need it for debugging later): pastebin.com/VAW8zie3
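The merge described above can be sketched as: keep one global vertex list, and when the 0x40 flag is set, reuse any earlier vertex within a small tolerance instead of appending a new one. The 0.5 tolerance is the one from the earlier matching experiment; the function shape is mine.

```python
def add_vertex(global_verts, vertex, shared=False, tol=0.5):
    """Return the global index for a vertex, merging near-duplicates when shared."""
    if shared:
        for i, prev in enumerate(global_verts):
            if all(abs(p - v) <= tol for p, v in zip(prev, vertex)):
                return i  # snap to the previously declared vertex
    global_verts.append(vertex)
    return len(global_verts) - 1
```

A face for a shared polygon is then built from the returned global indices, so the knee quads end up referencing the same vertices as the polygon above them and the mesh stays continuous.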
Post by ombrard on Jul 18, 2020 14:42:22 GMT -5
Here is some info on the texture compression of MML2. There are two words used in the header: the size of the data and the data position. The block is divided into an LZ header and then the compressed data. The size is in the word at 0x4, and the word at 0x10 tells us the start of the compressed data.
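As a quick sketch of pulling out those two header words with plain Python (the function and field names are mine; only the offsets 0x4 and 0x10 come from the notes above):

```python
import struct

def read_lz_header(block):
    """Read the compressed size (word at 0x4) and data start (word at 0x10)."""
    (size,) = struct.unpack_from("<I", block, 0x4)
    (data_start,) = struct.unpack_from("<I", block, 0x10)
    return size, data_start
```

The slice block[data_start:] would then be what gets handed to the decompressor below.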
struct CompressedState {
    uint32_t maybeErrorCode;
    uint16_t *out;
    uint32_t *bitmap;
    uint32_t maybePadding;
    uint32_t windowOffset;
    uint32_t currentBit;
    uint32_t bitBucket;
    uint32_t literalsCount;
};
uint32_t decompress(CompressedState *state, uint16_t *src, uint32_t chunkSize) {
    uint16_t in;
    uint32_t *bitmap;
    uint32_t windowOffset;
    uint32_t literalsCount;
    unsigned currentBit;
    uint16_t *out;
    unsigned bitBucket;
    unsigned bit;
    uint8_t *subSrc;
    int subLen;

    bitmap = state->bitmap;
    windowOffset = state->windowOffset;
    literalsCount = state->literalsCount;
    currentBit = state->currentBit;
    out = state->out;
    bitBucket = state->bitBucket;
    bit = bitBucket & currentBit;
    while (chunkSize != 0) {
        in = *src++;
        if (bit == 0) {
            *out++ = in;
        } else {
            if (in == 0xffff) {
                windowOffset += 0x2000;
                if (--literalsCount == 0) {
                    return state->maybeErrorCode;
                }
            } else {
                subSrc = windowOffset + (in >> 3);
                subLen = (in & 7) + 2;
                while (subLen-- != 0) {
                    uint8_t *subOut = (uint8_t *)out++;
                    *subOut++ = *subSrc++;
                    *subOut++ = *subSrc++;
                }
            }
        }
        currentBit >>= 1;
        chunkSize--;
        if (currentBit == 0) {
            bitBucket = *bitmap++;
            currentBit = 0x80000000;
        }
        bit = bitBucket & currentBit;
    }
    state->bitmap = bitmap;
    state->literalsCount = literalsCount;
    state->currentBit = currentBit;
    state->out = out;
    state->bitBucket = bitBucket;
    state->windowOffset = windowOffset;
    return 0;
}

Thanks to denim for some info and asm code, and Nicolas Noble (pixel) for the proper C code!
kion
Arukoitan
@kion_dgl
Posts: 193
|
Post by kion on Jul 19, 2020 4:25:27 GMT -5
Nice, hopefully this means we can finally unpack the TIM textures directly from the game data. This isn't going to be plug and play as far as just running the code above and having the decompressed result; I need to go back and look at the header of the TIM files to make sure I have the right selection for the compressed part of the texture to pass into the function above, so that we can decompress it and then render it to a canvas.

The file I'm going to be using as an example is st0ft.bin. It looks like there is another file at the start of the file, and then the list of textures starts after that. So we'll try to make a list of texture headers to try and guess what the struct type is, and then separate out the compressed data from there. It should be all of the information after the palette. So we basically need to find the number of palettes, the number of colors, and the width and height of the texture. For now we'll go in order.

00001b000: 03000000 20800000 0a000000 0000f900 .... ...........
00001b010: 10000100 80010001 40000001 00000000 ........@.......
00001b020: 00000000 b0040000 00000000 00000000 ................

Let's see how my brain handles the stress of simple math. The second dword looks like the full size of the image: 0x8020. The 0x20 looks like a conspicuous size for a color palette, which means we're probably looking at a 16 color image with a single palette. For the image, if we had 256 x 128 at 8 bits per pixel, that's 0x100 * 0x80, which would give us 0x8000 bytes; except in this case we have an image that's 4 bits per pixel, so 256 x 256 at 4bpp comes to the same 0x8000. And ouch, that's enough math for my brain at the moment.

For now we'll mark the start and stop of this file to work with it later.

Start of palette: 0x001b030
Start of data: 0x001b050

And since we don't know exactly where it ends, since it's entirely possible for a file to trail off to zero, we'll try to mark where the file stops having non-zero bytes, which is around 0x001ffd0.

000020000: 02000000 20000000 01000000 1000f900 .... ...........
000020010: 10000100 00000000 00000000 00000000 ................
000020020: 00000000 00000000 00000000 00000000 ................
000020030: a498c5a0 e6a848b9 abcdccd1 edd90ee2 ......H.........
000020040: 2fe650ee 71f6b3fe ffefae99 53ae2891 /.P.q.......S.(.

Following that it looks like we have an isolated palette.

000020800: 02000000 20000000 01000000 2000f900 .... ....... ...
000020810: 10000100 00000000 00000000 00000000 ................
000020820: 00000000 00000000 00000000 00000000 ................
000020830: aabdabbd ccc1edc5 0eca2fce 50d271d6 ........../.P.q.
000020840: 92d6b3da d4def5e2 16e737eb 58ef58ef ..........7.X.X.

And another one.

000021000: 02000000 20000000 01000000 3000f900 .... .......0...
000021010: 10000100 00000000 00000000 00000000 ................
000021020: 00000000 00000000 00000000 00000000 ................
000021030: 4ece4ece 6fd28fd6 90dab1de d1def2e2 N.N.o...........
000021040: f2e613eb 34ef54ef 55f375f7 96fbb7ff ....4.T.U.u.....

Aaaaaand another.

000021800: 02000000 20000000 01000000 4000f900 .... .......@...
000021810: 10000100 00000000 00000000 00000000 ................
000021820: 00000000 00000000 00000000 00000000 ................
000021830: 32aa32aa 53ae74b2 95b6b6ba b7bed8c2 2.2.S.t.........
000021840: f8c219c7 3acb3bcf 5cd37dd7 9edb9edb ....:.;.\.}.....

And another.

000022000: 03000000 20800000 07000000 2000f801 .... ....... ...
000022010: 10000100 c0010001 40000001 00000000 ........@.......
000022020: 00000000 e0020000 00000000 00000000 ................

We finally come to another TIM file. Very similar to the first, with a single 16 color palette.

Start of palette: 0x0022030
Start of data: 0x0022050
End of data: 0x0025100

000025800: 02000000 20000000 01000000 0000f801 .... ...........
000025810: 10000100 00000000 00000000 00000000 ................
000025820: 00000000 00000000 00000000 00000000 ................
000025830: acadcdb1 0fba30c2 72c6b4ce d5d6f6da ......0.r.......
000025840: 17df59e7 8bddd1fb aba5cca9 0dae4fb6 ..Y...........O.

And then another palette.

000026000: 03000000 40800000 05000000 0000f201 ....@...........
000026010: 20000100 00020001 40000001 00000000 .......@.......
000026020: 00000000 40020200 00000000 00000000 ....@...........
And then I think if we have three TIM images to work from, we should have some data points to work with. Looks like we have 1 palette with 0x20 colors, and then a similar length. For now we'll start with these data points and compare with the headers on the PC version to start breaking them down.

Edit 1:

Offset | 0x00 | 0x02 | 0x04 | 0x06 | 0x08 | 0x0a | 0x0c | 0x0e |
---|
0x0000 | 0x03 (TIM enum) | 0x00 | Size of Decompressed Data | 0x00 | Num (function unknown) | 0x00 | Palette Framebuffer X | Palette Framebuffer Y |
---|
0x0010 | Palette Color Count | Number of Palettes | Image Framebuffer X | Image Width | Image Framebuffer Y | Image Height | 0x00 | 0x00 |
---|
0x0020 | 0x00 | 0x00 | Dictionary Size | 0x00 | 0x00 | 0x00 | 0x00 | 0x00 |
Made some progress thanks to discussing the TIM files with OmbraRD on Discord last night. Writing up some documentation so it doesn't slip into the black hole of the chat log. The table above shows what I think the structure of the TIM header is.

At 0x00 the first word is the enum marker for TIM image files, which is 0x03. As a side note, the enum for TIM palette files is 0x02.

The word at 0x04 is the size of the decompressed image + palette. The palette data is included in the compressed image payload. For example, if the full decompressed size is 0x8020, this indicates a single 16 color palette with a length of 0x20, and then 0x8000 is the size of the image, which is probably 256 x 256 with 4 bits per pixel.

At 0x08 there is a word that I have no idea what the value is for. I've seen a few different TIM images and they all seem to have a different number. Not sure if this is a flag for whether the file is compressed, or how many bits per pixel the image is. This seems like one of those cases where it's nice to know what something means, but it's not essential.

The words at 0x0c and 0x0e are probably the framebuffer X and Y positions for the palette. And the words at 0x10 and 0x12 are pretty clearly the number of colors in each palette and the number of palettes. The headers for TIM images and TIM palettes are the same up to this point, and TIM palettes don't have values from 0x14 on.

For the word at 0x14 we probably have the image framebuffer X position, and at 0x16 we have the image width. It looks like it's often labelled 0x100 for 256, so it looks like the game doesn't adjust the width value depending on the bits per pixel. I think in retrospect the game was defining width in terms of bytes per row, but it's a lot easier to use the width directly, and I think the programmers adjusted that. At 0x18 we have the image framebuffer Y position and at 0x1a we have the image height.

The last value is at 0x24, and I think this is best described as the dictionary size.
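Those sizes can be double-checked with a couple of lines (assuming the layout really is a 16 color palette followed by 4bpp pixel data):

```javascript
// 16 colors at 2 bytes each (15-bit PSX colors) plus a 256x256 image at
// 4 bits per pixel should add up to the 0x8020 decompressed size.
const paletteBytes = 16 * 2;           // 0x20
const imageBytes = (256 * 256) / 2;    // 0x8000
console.log((paletteBytes + imageBytes).toString(16)); // prints "8020"
```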
Originally I was thinking this was a pointer to the compressed payload, but the value itself is a length relative to the start of the file after the header, so it's probably best to describe this as the dictionary length.

Using the image above as a reference, we have the gray padding before the start of the file in a .BIN archive. At a given 0x800 offset we have the start of the TIM file, with a header with a size of 0x30. After the header we have the dictionary for the compressed data, and after the dictionary we have the compressed payload. The end of the payload is marked by the bytes 0xFFFF, so you can find the start and end of the data. Following that you have padding until the next file in the archive or the end of the archive file.

Edit 2:

We can compare the PC image for ST0FT.DAT to the first image in the PSX st0ft.bin. Since it's probably a straight port, the files are probably the same, so we can use this to confirm the predicted values for the PSX header. For the palette X we have 0, which matches our prediction, and the palette framebuffer Y is 249, which matches up with 0xf9. For width and height we have 0x100 and 0x100 for 256x256. And the image framebuffer is 384, 256, which actually matches up with 0x180 (@ 0x14) and 0x100 (@ 0x16). Which means we do have a fix: the order is Image Framebuffer X, Image Framebuffer Y, Image Width (adjusted for BPP) and Image Height. So it does do the same thing as MML1. I think what that effectively means is that if we can decompress the data, we should be able to pass it into the same function we have from MML1.

For the header format we have:

Offset | 0x00 | 0x02 | 0x04 | 0x06 | 0x08 | 0x0a | 0x0c | 0x0e |
---|
0x0000 | 0x03 (TIM enum) | 0x00 | Size of Decompressed Data | 0x00 | Number of Sections※ | 0x00 | Palette Framebuffer X | Palette Framebuffer Y |
---|
0x0010 | Palette Color Count | Number of Palettes | Image Framebuffer X | Image Framebuffer Y | Image Width (in bytes) | Image Height | 0x00 | 0x00 |
---|
0x0020 | 0x00 | 0x00 | Dictionary Size | 0x00 | 0x00 | 0x00 | 0x00 | 0x00 |
---|
※ A section is 2048 bytes, so this word is an integer count of sections that describes the size of the compressed file. Credit to OmbraRD on Discord for this.
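Reading the table back, here is a sketch of parsing these fields with a DataView (the field names are mine, and the width/height units follow the table above, which may still need adjusting):

```javascript
// Parses the 0x30-byte TIM header at byte offset `ofs` (little-endian).
// Field names are my own; the offsets come from the table above.
function readTimHeader(fp, ofs) {
  return {
    type: fp.getUint32(ofs + 0x00, true),           // 0x03 = TIM image, 0x02 = TIM palette
    decompressedSize: fp.getUint32(ofs + 0x04, true),
    sectionCount: fp.getUint32(ofs + 0x08, true),   // sections of 0x800 bytes
    paletteX: fp.getUint16(ofs + 0x0c, true),
    paletteY: fp.getUint16(ofs + 0x0e, true),
    colorsPerPalette: fp.getUint16(ofs + 0x10, true),
    paletteCount: fp.getUint16(ofs + 0x12, true),
    imageX: fp.getUint16(ofs + 0x14, true),
    imageY: fp.getUint16(ofs + 0x16, true),
    imageWidth: fp.getUint16(ofs + 0x18, true),     // units debated above
    imageHeight: fp.getUint16(ofs + 0x1a, true),
    dictionarySize: fp.getUint32(ofs + 0x24, true)  // bitfield length in bytes
  };
}
```

Against the first header in st0ft.bin at 0x1b000, this should report a decompressed size of 0x8020, an image framebuffer X of 0x180, and a dictionary size of 0x4b0.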
Post by kion on Jul 20, 2020 15:20:54 GMT -5
Haven't had any luck with decompression. I guess the good news is that this kind of validates my confusion from before, but it'd be better to have the files decompressed as opposed to just validated. One thing I definitely skipped on before is taking notes and describing the process. So let's go ahead and start documenting the format and we can work from there.

The file I'll be working with as an example is st0ft.bin, and I'll be using the first image that appears in the file at 0x1b000. We've compared against the PC version, so we know what the image should look like.

What do we know so far? The game is using LZSS compression, a format commonly used in games ever since Haruhiko Okumura (an engineer from Toshiba) published example code into the public domain in 1989. The way it works is that there is a bitfield and a compressed payload. The bitfield is exactly what it sounds like: a series of bits.

We'll look at the following hex, which shows the header and the start of the bitfield. With respect to compression, there are two values in the header that we're interested in. The 0x8020 value at 0x04 is the decompressed size of the data, and 0x4b0 at 0x24 is the length of the bitfield.

As for the bitfield, it starts with 0x3880, and we want to pretty much treat this as a binary string: 0011100010000000. We go through one bit at a time: a 0 indicates "copy the value directly to the output", and a 1 indicates a "go back and copy" value. Note that the bits could be read from high to low or from low to high, and they could also be read one byte at a time, high to low or low to high. So we have four possibilities.

And then here we have where the compressed payload starts. The TIM file starts at 0x1b000. The bitfield starts at 0x1b030, and since the length of the bitfield is 0x4b0, the start of the payload is at 0x1b4e0. As mentioned, each word in the payload is either a value to copy or an instruction to go back and copy.
Each bit in the bitfield should have a corresponding word in the payload. So if we multiply this out: the length of the bitfield is 0x4b0 bytes, and multiplying that by 8 gives 0x2580 bits. Since we should have one word for each bit, that's 0x2580 * 2 bytes per word = a payload length of 0x4b00. Adding 0x4b00 to the start of the payload at 0x1b4e0, the payload should end around 0x1ffe0. And that's roughly what we see: a small amount of padding right before a TIM palette is declared in the file after the TIM image.

We can go ahead and simulate how we expect this to work. As inputs we have the bitfield and the payload. For the output we prepare a buffer with the size declared in the header. Since there is nothing in the output buffer yet, there is nothing to go back and copy against, so to start out we expect a series of 0 bits that simply copy words into the output buffer.

We copy words until we come across a 1 bit. A 1 bit means the corresponding word is a "go back and copy" instruction. In that case we split the word into two parts: 3 bits should be the length of the words to copy, and the other 13 bits should be the offset from the start of the "window". To start off with, the window is the start of the output buffer. But since the offset only has 13 bits, the largest offset we can make is 0x1fff. Which is where the 0xffff value comes in: where we have a 1 bit and then 0xffff as the word value, it increments the "window" by 0x2000 bytes. So the window is basically the point of reference for the "go back and copy" instructions. We start at the beginning of the file, but as the length of the output grows, it changes where we might want to go back and copy from, so we need to move the reference up to keep up with the position of the output.

So what's going wrong? There are a lot of things that can go wrong. First, we don't know the order of the bitfield.
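That payload arithmetic can be checked directly with the numbers from this post:

```javascript
// The bitfield is 0x4b0 bytes, and each bit has one 2-byte payload word.
const bitfieldBytes = 0x4b0;
const bits = bitfieldBytes * 8;                // 0x2580 bits
const payloadBytes = bits * 2;                 // 0x4b00 bytes
const payloadStart = 0x1b030 + bitfieldBytes;  // bitfield starts at 0x1b030
const payloadEnd = payloadStart + payloadBytes;
console.log(payloadStart.toString(16)); // prints "1b4e0"
console.log(payloadEnd.toString(16));   // prints "1ffe0"
```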
For example, the first word value is 0011100010000000; having only two 0 bits before the first 1 bit doesn't seem like enough room, so it's possible that the bit values run from low to high instead of high to low. And then for the instructions, we don't know if the offset is the top 13 bits or the bottom 13 bits. So what we need to do is take the first 16 bits and the first 16 words and figure out which combinations are even possible. With the wrong bit order or the wrong instruction layout it's really easy to get values that don't make any sense and cause out-of-bounds errors.
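As an illustration of those candidate orderings (the helper names are mine), here is how the first bitfield word 0x3880 reads under a few of them:

```javascript
// Enumerate candidate bit orders for a bitfield word; only one of these
// can be the order the game actually uses.
function bitsHighToLow(word, width) {
  const bits = [];
  for (let k = width - 1; k >= 0; k--) bits.push((word >> k) & 1);
  return bits;
}

function bitsLowToHigh(word, width) {
  return bitsHighToLow(word, width).reverse();
}

const word = 0x3880;
// Whole 16-bit word, high to low:
console.log(bitsHighToLow(word, 16).join('')); // prints "0011100010000000"
// Whole 16-bit word, low to high:
console.log(bitsLowToHigh(word, 16).join('')); // prints "0000000100011100"
// One byte at a time in file order (0x80 then 0x38), high to low per byte:
console.log(bitsHighToLow(word & 0xff, 8).join('') +
            bitsHighToLow(word >> 8, 8).join('')); // prints "1000000000111000"
```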
Post by kion on Nov 11, 2020 10:51:55 GMT -5
I think I completely forgot to follow up here. I managed to get the decompression working. And the code went like this:

const fp = window.buffer;

// First we read the bitfield into an array of bits
const bitField = [];
let ofs = tim.offset + 0x30;
for (let i = 0; i < tim.bitfield_size; i += 4) {
    let byte = fp.getUint32(ofs, true);
    ofs += 4;
    for (let k = 31; k > -1; k--) {
        let bit = 1 << k;
        bitField.push((byte & bit) ? 1 : 0);
    }
}
tim.buffer = new ArrayBuffer(tim.decompressed_size);
tim.fp = new DataView(tim.buffer);

// Then we decompress the payload
let out_ofs = 0;
let window_ofs = 0;
for (let i = 0; i < bitField.length; i++) {
    let bit = bitField[i];
    let word = fp.getUint16(ofs, true);
    if (bit === 0) {
        // Literal: copy the word straight to the output
        tim.fp.setUint16(out_ofs, word, true);
        out_ofs += 2;
    } else if (word === 0xffff) {
        // Slide the copy window forward
        window_ofs += 0x2000;
    } else {
        // Go back and copy (word & 7) + 2 words from window_ofs + offset
        let val = (word >> 3) & 0x1fff;
        let copyFrom = window_ofs + val;
        let copyLen = (word & 0x07) + 2;
        while (copyLen--) {
            let w = tim.fp.getUint16(copyFrom, true);
            copyFrom += 2;
            tim.fp.setUint16(out_ofs, w, true);
            out_ofs += 2;
        }
    }
    if (out_ofs >= tim.decompressed_size) {
        break;
    }
    ofs += 2;
}
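The same scheme can be restated over plain arrays as a sanity check; this is an illustration of the loop above, not the tool's actual code:

```javascript
// Word-oriented LZSS as used above: bit 0 = copy literal word,
// bit 1 with word 0xffff = slide the window forward 0x2000 bytes,
// bit 1 otherwise = copy ((word & 7) + 2) words starting at byte offset
// window + (word >> 3). Assumes even byte offsets for simplicity.
function lzssWords(bits, words, outWords) {
  const out = [];
  let windowOfs = 0; // byte offset, like window_ofs above
  for (let i = 0; i < bits.length && out.length < outWords; i++) {
    const word = words[i];
    if (bits[i] === 0) {
      out.push(word);
    } else if (word === 0xffff) {
      windowOfs += 0x2000;
    } else {
      let from = (windowOfs + (word >> 3)) / 2; // convert bytes to word index
      let len = (word & 7) + 2;
      while (len--) out.push(out[from++]);
    }
  }
  return out;
}

// Two literals, then "go back to byte 0 and copy 2 words":
// lzssWords([0, 0, 1], [0x1111, 0x2222, 0x0000], 4)
//   → [0x1111, 0x2222, 0x1111, 0x2222]
```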
I made a tool for it here: gitlab.com/megamanlegends/megamanlegends.gitlab.io/-/tree/master/public/utils/mml2_psx_tex

And the code for decompression is in this file: gitlab.com/megamanlegends/megamanlegends.gitlab.io/-/blob/master/public/utils/mml2_psx_tex/js/main.js

And here's a tweet back from when I did the stuff and things (wtf happened in 2020?). After that I meant to go back and try to make a simple tool for viewing textures, or maybe add code into the PSX MML2 viewer to be able to add textures to the models. But 2020 has had other plans. Trege PM'd me about updating the MML2 PC models to include animations, so I'll have to update the code so that the tool can export glTF/DMF.
Post by nodespaghetti on Feb 22, 2021 21:55:31 GMT -5
Hello, this is my first post here. I think I may be a little late, but I can help with anything Blender/Python related. Really impressive stuff on this thread!
Post by kion on Mar 14, 2021 22:46:30 GMT -5
Hello, this is my first post here. I think I may be a little late, but I can help with anything Blender/Python related. Really impressive stuff on this thread!

Hi, thanks! I have a really hard time working with the Blender Python API, so I'll post more of my rants with respect to working with Blender here to try and get some help with that.
-----
In terms of the forum topic, I was tempted to start a new thread to start with a blank slate, but seeing as this thread is still on the first page, I might as well use it and keep posting here. It's been a little while since I've looked at Megaman Legends 2, and it was mostly looking at the grouped enemy / entity models packed into each scene. I haven't unpacked the Megaman player model or animations, or looked at the stage files at all yet.
Taking some lessons from Megaman Legends 1, I kind of want to try a different approach with Legends 2: breaking the archives down into individual files to work with. That way we can hopefully avoid working with save states. But in the cases where we do need save states, we can make a small improvement on that front by allowing for multiple save states to be saved and selected from, as opposed to forcing a single save state. We'll try the clean approach first before we get too dirty.
So right now what I want to do is generate a list of all of the individual files in the game: whether they are copied into memory or into VRAM, and where. In terms of memory this should give us an idea of where to start poking around. If two files are copied into the same place in memory, then only one or the other can be loaded at a time. Likewise with VRAM, we can build a map of which textures and palettes get copied where, to start figuring out the relationships between models, textures, and palettes.

Generally my hope is that this should give us a specific set of locations in memory to focus on, so we can get those mapped out and start poking away.
I was thinking about starting with the COMMON folder. But in general that's probably a terrible place to start. Should probably start with the DAT folder to get the scenes and entities and then fit the COMMON folder into that.
Post by tsuyoiraion on Apr 24, 2021 18:07:55 GMT -5
I was thinking about starting with the COMMON folder. But in general that's probably a terrible place to start. Should probably start with the DAT folder to get the scenes and entities and then fit the COMMON folder into that.
Very nice work so far, looking great! I was thinking, since the files are pretty much the same across the PSX and PC versions, there should technically be a way to find the files that contain the English audio or game text and cross them over, no? I know that the PSX files are .BIN and the PC files are .DAT, which don't actually work if interchanged and crash the game (i.e. copying TITLE.BIN to the install folder and deleting TITLE.DAT; I also tried renaming .BIN to .DAT). Obviously it's not meant to work that way, but it was worth a shot. However, I still think the idea is sound: technically we should be able to interchange the English audio for the Japanese somehow. Also, I saw a post yesterday about a debug tool for Rockman Dash 2 on PSX which enables you to switch the text to English; perhaps there's something similar we could do to the PC version? Any thoughts on this? Wishful thinking, but I'd love to get some subtitles for the cutscenes, that way we can have Japanese audio with English text. However, I understand if that's outside the realm of possibilities.
Post by kion on Jan 15, 2022 21:03:05 GMT -5
Taking notes here: gitlab.com/megamanlegends/sulphur-bottom/-/wikis/Data-Type:-Entity-Mesh

I think I might use this forum for taking notes, and then have the final information in the wiki. Right now there are a bunch of unknown attributes in the mesh header. When I get home tonight I'll write some scripts to look at them.

I was thinking about starting with the COMMON folder. But in general that's probably a terrible place to start. Should probably start with the DAT folder to get the scenes and entities and then fit the COMMON folder into that.
Very nice work so far, looking great! I was thinking, since the files are pretty much the same across the PSX and PC versions, there should technically be a way to find the files that contain the English audio or game text and cross them over, no? I know that the PSX files are .BIN and the PC files are .DAT, which don't actually work if interchanged and crash the game (i.e. copying TITLE.BIN to the install folder and deleting TITLE.DAT; I also tried renaming .BIN to .DAT). Obviously it's not meant to work that way, but it was worth a shot. However, I still think the idea is sound: technically we should be able to interchange the English audio for the Japanese somehow. Also, I saw a post yesterday about a debug tool for Rockman Dash 2 on PSX which enables you to switch the text to English; perhaps there's something similar we could do to the PC version? Any thoughts on this? Wishful thinking, but I'd love to get some subtitles for the cutscenes, that way we can have Japanese audio with English text. However, I understand if that's outside the realm of possibilities.

PC and PSX both use .DAT files, which are basically archives, but the way these files are arranged is different. The PSX version has no index at the top; each file type has a fixed-length header, and there is padding between each of the files. This was probably done so that the files could be read sequentially from the disc by the PSX. The PC version is a little different in that there is no padding (everything is crammed together one after the other), and there is a header at the top which gives the offsets for where everything is. A non-technical comparison would be like putting your clothes in a backpack versus a suitcase: it's the same content, just in a different container. Other than that, the contents of the files inside the DAT archives are mostly similar if not the same (aside from textures). It might be possible to port the text from Megaman Legends 2 PSX to Megaman Legends 2 PC.
Though I think porting to the PSP Megaman Legends 2 version might be the easier alternative. The files will probably be the same, and the game is already in widescreen and would run on Android and stuff with emulators.