Building a Better Higurashi

[Title image: the game scene viewed in 3D space]

Hey everyone, Doddler here. I wanted to talk a little bit about my work on the new release of Higurashi Chapter 1.

Where it started

Back when Steam released their Greenlight platform, we put up Go Go Nippon and Higurashi almost immediately. It seemed like a sure bet, bringing visual novels to a wider audience. You might not remember it well, but it wasn’t so long ago that a visual novel being on Steam would be completely unthinkable.

Originally, the plan was simply to put the existing release up for the Steam community and see where it went. But once Go Go Nippon was released, everything changed. The game was a huge success, well beyond what we had thought possible. It had the advantage of being one of the first few anime / visual novel titles on the platform, but really it proved that there was a real opportunity.

When Higurashi was greenlit, someone on the staff asked a simple question: “Since GGN did so well, maybe it would be possible to get the console art and/or voice?” It was a simple question really, but it sparked a debate about what we could do for the game. We could just release the game as-is; after all, it was already in English and had been available for years. But we felt there was an opportunity to do more. We were given the chance to do better.

So Evospace set out on the Japanese side, asking around about the different versions of the game and the updated assets. There was no luck there, but he had friends at various studios, and we were able to contract an artist to do new art for the English release. We reviewed the translation, found it was below our current standards, and decided to redo it. The original English release was a port done by Overdrive on the Buriko/BGI engine, and since I had already done a port from that platform before, we could use that as a starting point and improve the game engine as well.

It would be a first for us, doing original development on a project. We set out to build a better Higurashi.

Building the game engine

Higurashi isn’t like most games we work on, as the game already has a cult following of incredibly devoted fans. In Japan there have been multiple releases across multiple platforms, and fan projects have stitched together the best pieces to make their own definitive versions. For us to release an official ‘definitive’ version would mean somehow satisfying all those fans.

While it wasn’t possible for us to license all the assets people had hoped for, what we could do was make a solid release and make it easy for people to do what they want with their copy of the game. With that in mind, I set out to build a new version of Higurashi.

The engine design

A visual novel engine is generally composed of a few primary components:

  1. The scripting system
  2. Scene and display system
  3. User interface
  4. Other core systems (asset handling, audio, state system, etc)

I’ll go into detail about each system: how it works and how I set it up.

The scripting system

At its simplest level, a visual novel is a script, a sort of screenplay that directs all the onscreen action and describes what should happen and when. When doing a port of a game, I effectively take the existing scripts and build the systems around them to interpret them in a way that appears the same as the original. The game makes its way through the script, pausing or waiting as necessary, so everything runs as you’d expect.

The original Higurashi was written in NScripter, but I opted to start with the port that Overdrive did on the Buriko/BGI engine. Simply put, I had already written a BGI interpreter as part of my work on d2b vs Deardrops, so it was easier to use that as a starting point for the new version.

The scripts look something like this:

void main()
{
	FadeOutBGM( 0, 1000, FALSE );
	FadeOutBGM( 1, 1000, FALSE );
	FadeOutBGM( 2, 1000, TRUE );

	DisableWindow();
	DrawScene( "white", 400 );
	PlayBGM( 1, "msys01", 128, 0 );
	DrawScene( "bg_108", 3000 );
	DrawBustshotWithFiltering( 1, "me_se_wi_a1", "left", 1, -160, 0, FALSE, 0, 0, 0, 0, 0, 0, 1300, TRUE );

	OutputLine(NULL, "「あははは! どうだった圭ちゃん!」", NULL, "\"Ahahaha! How was it, Kei-chan?\"", Line_WaitForInput);
	OutputLineAll(NULL, "\n", Line_ContinueAfterTyping);

This is the original BGI script format, with a few minor changes added to allow for different languages. It mirrors C code, though my interpreter only really supports the functions required to run the script rather than the full set of functionality.

Interpreting a script like this is not as easy as you might think at first glance. To really make use of it, you need to apply some compiler theory and convert it into an easier-to-use format. This involves parsing the original text and converting it into a series of symbols that can be consumed by your code, called an abstract syntax tree (AST). While you can do all the hard work and build the code to create your AST manually, I chose to use a freely available tool called Antlr, which will generate that code for you if you provide it a set of grammar rules. It’s all incredibly messy so I won’t go into detail about it, but after you get it all working, the Antlr code will spit out an AST that looks like this:

(BLOCK main
	(OPERATION FadeOutBGM (PARAMETERS (TYPEINT 0) (TYPEINT 1000) (TYPEBOOL FALSE)))
	(OPERATION FadeOutBGM (PARAMETERS (TYPEINT 1) (TYPEINT 1000) (TYPEBOOL FALSE)))
	(OPERATION FadeOutBGM (PARAMETERS (TYPEINT 2) (TYPEINT 1000) (TYPEBOOL TRUE)))
	(OPERATION DisableWindow)
	(OPERATION DrawScene (PARAMETERS (TYPESTRING "white") (TYPEINT 400)))
	(OPERATION PlayBGM (PARAMETERS (TYPEINT 1) (TYPESTRING "msys01") (TYPEINT 128) (TYPEINT 0)))
	(OPERATION DrawScene (PARAMETERS (TYPESTRING "bg_108") (TYPEINT 3000)))
	(OPERATION DrawBustshotWithFiltering (PARAMETERS (TYPEINT 1) (TYPESTRING "me_se_wi_a1") (TYPESTRING "left") (TYPEINT 1) (TYPEINT -160) (TYPEINT 0) (TYPEBOOL FALSE) (TYPEINT 0) (TYPEINT 0) (TYPEINT 0) (TYPEINT 0) (TYPEINT 0) (TYPEINT 0) (TYPEINT 1300) (TYPEBOOL TRUE)))
	(OPERATION OutputLine (PARAMETERS TYPENULL (TYPESTRING "「あははは! どうだった圭ちゃん!」") TYPENULL (TYPESTRING "\"Ahahaha! How was it, Kei-chan?\"") (TYPEINT 2)))
	(OPERATION OutputLineAll (PARAMETERS TYPENULL (TYPESTRING "\n") (TYPEINT 3)))

Interpreting the scripts and building a usable format in memory is actually somewhat slow. Processing all the scripts in Higurashi at once takes about 10-15 seconds on my PC, and it can be slower on older computers. To simplify the process, the AST is converted into a binary format that can be used at runtime without any pre-processing. When you play the game, the file is simply loaded into memory, and by maintaining a pointer into the binary file of your current position, you can read the commands as necessary to advance the script.

In a hex editor, it looks like this:

[Image: the compiled script binary viewed in a hex editor]

That looks a bit daunting, but the interpretation is pretty simple. Take a look at a small snippet:

[Image: a small annotated snippet of the compiled script]

It’s really quite easy to understand when you lay it out like that: it’s basically an output of the AST, with line numbers added and no markers for the parameters. It’s essentially just a list of symbols. The first two bytes designate what the next value will be interpreted as, followed by the data. At the root level you have a set of symbols such as ‘line number’, ‘function call’, ‘operation’, ‘if’, and so on, and then within the context of an operation the symbols indicate things like integers, strings, other function calls, or even math operators. So you can put math or function return values within the parameters of a function and it will actually be calculated at runtime, though that’s not widely used in either of the BGI games I worked on.
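As a rough sketch of what consuming that format looks like in C#, the engine’s language (the class, method names, and exact symbol values here are all illustrative, not the engine’s own):

using System.IO;
using System.Text;

// A minimal sketch of a reader for the compiled script format described
// above. The symbol values and names are illustrative, not the engine's own.
enum Symbol : short { LineNumber = 1, Operation = 2, Int = 3, String = 4, Bool = 5, Null = 6 }

class ScriptReader
{
	private readonly BinaryReader reader;

	public ScriptReader(byte[] compiledScript)
	{
		// The whole file sits in memory; the stream position acts as the
		// pointer into the binary file described above.
		reader = new BinaryReader(new MemoryStream(compiledScript));
	}

	// The first two bytes designate what the next value is...
	public Symbol ReadSymbol()
	{
		return (Symbol)reader.ReadInt16();
	}

	// ...followed by the data itself.
	public int ReadInt() { return reader.ReadInt32(); }
	public bool ReadBool() { return reader.ReadInt32() != 0; }

	public string ReadString()
	{
		int length = reader.ReadInt16();
		return Encoding.UTF8.GetString(reader.ReadBytes(length));
	}

	// Saves store a command ("line") number rather than a byte offset, so a
	// table of line-to-byte positions lets a load seek somewhere valid even
	// after the script file has been rebuilt (as explained below).
	public void SeekToLine(int line, long[] lineOffsets)
	{
		reader.BaseStream.Position = lineOffsets[line];
	}
}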

If you were wondering why the line number entries are in there, it’s so the game can keep track of what you’ve read, and so restoring a save game can return you to a designated line. Calling them line numbers is a bit misleading, since they actually indicate the current command number within the file, so in this example these are the 3rd and 4th operations of the file. Rather than restoring your position to a specific byte offset in the file, which would crash your game if the script file was updated (something you might recall being an issue with Overdrive’s games), there’s a table of where each line is in the file, so the game can jump back to where you left off. Only changing the number of lines in the file will make loading put you at the wrong position, and even then it will still be a valid position, even if it’s not the right one.

Anyways, each command encountered while running the game has a special function within the engine itself that will be called when reached. Those functions pull the parameters in from the script and then perform whatever actions they need to do their intended task. Here’s an example of the DrawScene function:

private BurikoVariable OperationDrawScene()
{
	SetOperationType("DrawScene");

	// Pull the parameters in from the script: the target texture and the
	// transition time (given in milliseconds, converted to seconds here).
	var texture = ReadVariable().StringValue();
	var time = ReadVariable().IntValue()/1000f;

	// Skipping makes the transition instant.
	if (gameSystem.IsSkipping)
		time = 0;

	gameSystem.SceneController.DrawScene(texture, time);
	gameSystem.ExecuteActions();

	// Record that this background has been seen.
	BurikoMemory.Instance.SetCGFlag(texture);

	return BurikoVariable.Null;
}

That’s what happens when the script hits the DrawScene command. It’s all relatively straightforward once the hard stuff is taken care of by the compiler, isn’t it?

An interesting bit worth looking at is the ‘ExecuteActions’ call. DrawScene always executes as soon as it’s reached, but many commands can queue up actions that are executed only once the final go is given. That way you can synchronize a bunch of different actions to occur at once.

As an example, say you wanted to fade to a scene that has a character already drawn in a specific pose. You couldn’t use a DrawScene command and then add the character after, since you’d end up waiting for the scene to fade in before drawing. Likewise, you couldn’t just set the pose before DrawScene: not only does DrawScene clear all the visible objects in the scene, you’d cause the character to appear before the transition! So to get the result you want, you call a DrawBustshot command with the execute flag set to FALSE, and then when the DrawScene command runs, it executes the stored actions, causing the DrawBustshot operation to occur on the new foreground scene as it fades in.
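Engine-side, that pattern boils down to a queue of deferred actions. Here’s a minimal sketch of the idea (the names are mine, not the shipping engine’s):

using System;
using System.Collections.Generic;

// Sketch of the deferred-action pattern: commands called with their execute
// flag set to FALSE queue work here instead of running immediately.
class ActionQueue
{
	private readonly List<Action> pending = new List<Action>();

	public void Queue(Action action)
	{
		pending.Add(action);
	}

	// Commands like DrawScene call this, so everything queued so far lands
	// on the same frame as the transition itself.
	public void ExecuteAll()
	{
		foreach (Action action in pending)
			action();
		pending.Clear();
	}
}

In these terms, DrawBustshot with the execute flag set to FALSE would hand its drawing work to Queue, and DrawScene’s flush is the “final go” that synchronizes everything.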

Normally when releasing a game you would only include the compiled script files; after all, they’re all that’s required to make the game work. I recognized, however, that users would want to make changes to the game script. Higurashi has been translated into several different languages, and there were other changes I knew users might want to make, such as adding in voice work. To make this easy, I chose to include the compiler directly in the game itself. If it detects that a newer uncompiled script exists, it will automatically compile it when you launch the game. Originally I chose to exclude the game flow scripts and only include the actual scene scripts, but I had a change of heart, and all of the scripts are now included with the game for you to dig through.
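The freshness check itself can be as simple as comparing timestamps, something along these lines (paths and names hypothetical):

using System.IO;

// Sketch of the "recompile if the source script is newer" check.
static bool NeedsCompile(string sourcePath, string compiledPath)
{
	if (!File.Exists(compiledPath))
		return true;

	// A script edited after the last compile triggers a rebuild on launch.
	return File.GetLastWriteTimeUtc(sourcePath) > File.GetLastWriteTimeUtc(compiledPath);
}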

Scene and Display System

The visual operations within the BGI engine are a bit different from those of other engines I’ve worked with, such as the Kirikiri engine, or Navel’s and Innocent Grey’s engines. Those games operate on a scene-switching system, where both a visible foreground scene and a hidden background scene exist; game objects are arranged onto the background, and then the background is revealed and becomes the new foreground. In that system, layers are placed on one of the two scenes, and the display is swapped between them whenever something new needs to appear on screen.

BGI takes the other approach, where layers can be manipulated directly, without any need to change the visible scene. Individual layers can be moved, faded in and out, or masked in various ways, all independently of each other. It gives you more power, but it’s also more complicated and technically expensive. That’s because the scene-swapping system takes care of one of the more annoying things you have to deal with: transitions.

To do a smooth crossfade between two images, you simply interpolate between the source and destination. That’s easy if you have two opaque images you’re fading between, but if your images have transparent sections, you’re going to need to ‘flatten’ (in the Photoshop sense) the layer with what’s below it before you can do the transition. This is easy in a scene-swapping system because you already have two opaque render targets to transition between, but it’s more complicated with individual layers, because you need to sample the colors below the layer to figure out what you’re transitioning from and to. I handle this using a custom shader which samples the current display buffer to find the ‘before’ and ‘after’ states to transition between.
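Conceptually, the math amounts to flattening first and interpolating second. Written out in C# for clarity (in the real game this happens per pixel in the shader):

using UnityEngine;

// Conceptual sketch of the layer crossfade. 'below' stands in for the color
// sampled from the display buffer beneath the layer.
static Color CrossfadeLayer(Color below, Color layer, float t)
{
	// Flatten the (possibly transparent) layer onto what's beneath it...
	Color flattened = Color.Lerp(below, layer, layer.a);

	// ...then interpolate from the untouched backdrop to the flattened result.
	return Color.Lerp(below, flattened, t);
}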

The layer crossfade shader, made in the ShaderForge visual shader tool.

In a game like Really Really, the visual system involved two distinct scenes with their own layers. After a transition occurred, a ‘backlay’ would happen, copying the state of the foreground to the background scene, where you make changes for the next transition. Cartagra and Kara no Shojo 2 are much the same, except those also distinguish between foreground and background layers. BGI can still do full scene transitions like the other engines, but doing so clears existing layers, so your regular transitions and sprite updates are made directly to the visible layers.

Unlike a strictly scene-transition engine, BGI’s systems allow multiple animations with different timing to occur, or even animations that span multiple frames of dialog. In Really Really, you might not have noticed, but no animation spans more than one dialog box. Cartagra could do it by using special foreground layers that weren’t affected by scene transitions. I don’t think any of these things are actually used in Higurashi, but they’re part of the engine nonetheless.

[Image: the scene components in Unity’s scene hierarchy]

My implementation of the BGI system uses a few different components; above is a screenshot of how that appears in Unity’s scene hierarchy. There are two cameras, one for each of the two scenes. There’s a pool of unused layers. There are the black bars that obstruct out-of-bounds objects when you are in fullscreen mode. And there’s a panel which contains all of the currently visible layers. There’s also a panel for upper-layer images (like faces), which is rendered by a separate camera above the UI and isn’t affected by scene transitions. Higurashi doesn’t make use of the face layers, however.

Actually, the upper layer reminds me that I should probably explain the rendering pipeline for the game engine. Scene rendering is handled via Unity cameras and the built-in layer masking. Objects placed in the scene are designated to be rendered only by a specific camera to ensure the scene is composed correctly. Some cameras have their own shaders as well, for post-processing to apply special effects or to create transitions. The first thing rendered is the active scene camera, which renders the scene background and any character layers that are drawn. There are two scene cameras; normally only one is active at a time. During a scene transition, the background camera is enabled and rendered first, and then the foreground camera is rendered with its own shader, which provides the fade-out effect, before being applied to the screen buffer. Then the text window and game text are rendered via their own camera. Then the upper layer (faces, overlays, etc.) is rendered above the UI. Finally, any other objects that are instantiated (options menu, save/load screen) are drawn.

When the script creates a layer, the layer is taken from a pool of unused layer objects and placed in the scene. Its render settings are adjusted to be visible to the currently active scene camera, and then its shader is assigned and configured to the correct one for its display type (usually a fade-in for its first appearance). All layers have the same size in BGI (or at least that’s how Overdrive used it), so all layer meshes match the screen resolution in size, in this case 640×480.
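A sketch of what that pooling looks like (the names and exact setup steps are illustrative):

using System.Collections.Generic;
using UnityEngine;

// Sketch of the layer pool: grab an unused layer object, point it at the
// active scene camera, and give it the right shader.
class LayerPool
{
	private readonly Stack<GameObject> unused = new Stack<GameObject>();

	public GameObject TakeLayer(int activeSceneLayer, Material fadeInMaterial)
	{
		GameObject layer = unused.Count > 0 ? unused.Pop() : CreateLayerMesh();

		// Render settings: visible only to the currently active scene camera.
		layer.layer = activeSceneLayer;

		// First appearance usually gets the fade-in shader.
		layer.GetComponent<Renderer>().material = fadeInMaterial;

		layer.SetActive(true);
		return layer;
	}

	public void ReturnLayer(GameObject layer)
	{
		layer.SetActive(false);
		unused.Push(layer);
	}

	private GameObject CreateLayerMesh()
	{
		// Every layer mesh matches the 640x480 screen resolution.
		GameObject quad = GameObject.CreatePrimitive(PrimitiveType.Quad);
		quad.transform.localScale = new Vector3(640, 480, 1);
		return quad;
	}
}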

Also the article’s title image! It shows that the scene is in fact in 3D space. You can see the black bars on each side, and the UI being displayed in front of the game elements. Also shown are the text window margins and the colliders used for the UI.

Interestingly, even though all meshes are 640×480 in size and the camera’s viewport is 640×480, that doesn’t mean the textures need to be that size, nor does the output from the camera. If you’re trying to have a pixel-perfect display, you need to match texture size to output size, but once I had very high resolution versions of the new sprites, I realized I could apply higher resolution images and still have them look good in game. This had the added bonus of allowing the game to scale up to higher resolutions without looking bad.

After a bunch of experimentation, I found that as long as the texture was a power-of-two multiple of the screen size, the graphics adapter would draw it just fine when scaled down. Actually, it was pretty good at any size, but powers of two gave the best results. So I settled on character sprites that are 1280×960. If you go fullscreen or pick a higher resolution than the default, it should still look good. In case you’re interested, the original sprites we got from the artist were 3000×2255! It’s traditional for artists to draw at twice the intended size, though, so I took the intended resolution and scaled it to the closest power of two.

Speaking of the asset pipeline, I built a system that automatically picks files from one folder or another depending on the in-game settings. I used this as the basis for swapping between the original and new sprites. In addition, when playing in Japanese text mode, the game automatically loads files ending in _jp if they exist. I also implemented a method to force the game to reload all in-use textures, which, combined with the above, lets you swap between both modes in real time. It breaks completely if you try a reload while a transition is occurring or a sprite is being moved, so originally I limited it to the options menu, but I patched in a shortcut key (the “P” key) to swap at any point where the game isn’t busy with an ongoing transition.
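A sketch of that lookup, with the folder layout and flag names standing in as placeholders for the real ones:

using System.IO;

// Sketch of picking an asset path based on in-game settings. Folder names
// and flags are illustrative, not the actual release layout.
static string ResolveSpritePath(string fileName, bool useNewSprites, bool japaneseMode)
{
	string baseDir = useNewSprites ? "StreamingAssets/CG" : "StreamingAssets/CGAlt";

	if (japaneseMode)
	{
		// Prefer a _jp variant when playing with Japanese text, if one exists.
		string jpPath = Path.Combine(baseDir, fileName + "_jp.png");
		if (File.Exists(jpPath))
			return jpPath;
	}

	return Path.Combine(baseDir, fileName + ".png");
}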

To help with modding, I chose to load the images externally rather than through Unity’s asset system. This is probably a little slower, but I knew for certain that users would want to replace the in-game sprites with their own, and leaving all the assets open makes that incredibly easy. Just drop in your own images with the right filenames and in 4:3 scale and watch it go!
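Loading a loose file into a texture is only a couple of calls in Unity, which is a big part of why this approach is so mod-friendly. A minimal sketch:

using System.IO;
using UnityEngine;

// Sketch of loading a texture from a loose file rather than Unity's asset
// system. LoadImage resizes the texture to the image's own dimensions.
static Texture2D LoadExternalTexture(string path)
{
	Texture2D texture = new Texture2D(2, 2, TextureFormat.ARGB32, false);
	texture.LoadImage(File.ReadAllBytes(path));
	return texture;
}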

The User Interface

The user interface was a bit of a point of contention. I knew people weren’t overly fond of the user interface in the original release, but not everyone knows why it has that UI. The original Higurashi release in NScripter actually had no UI for config, save/load, backlog, and so on; everything was handled via window dropdown menus or plain text. What UI did exist was reproduced faithfully: the title screen, the chapter screen, and the tips screen. If you’ve played Overdrive’s release of Kira Kira, you’d see that the other UI features were simply reskinned UI elements from that game, made to look like the existing Higurashi UI.

We did make some attempts at other UI options. We tried different theming, including a red color theme and a black one, and we experimented with different button styles, but we weren’t able to come up with a new UI that suited the game. So instead of going that route, we took the existing UI and worked to improve it in the ways we could.

One of the complaints I saw was that the UI was always in the way: the original had no distracting on-screen elements. As such, I implemented a way for the user interface to automatically hide so it isn’t always visible. The UI also didn’t scale very well at all, so we went through the same process with the UI as I did with the game assets. Vodoka whipped up some good-looking high resolution versions of the user interface elements, and I put the UI together to target 1024×768 and scale down for lower resolutions.

Using SDF font rendering, you can scale the text to crazy sizes without much in the way of serious distortion.

The text, however, was one place we could drastically improve. After all, a visual novel is primarily played by reading, so if that experience is poor, the whole experience is bad. Unity’s text rendering system is a bit of a mess, often buggy, and you can’t apply any real effects to it. A common workaround is to create a font ‘atlas’, where all the characters are written to a texture, and text is then drawn by referencing the proper locations in that texture. That’s how I did text in Really Really and d2bvsdd: I used a Unity tool called NGUI to create a font atlas for the intended font, and then modified the atlas to add the outline and drop shadow.

For Higurashi, I used the same system for text that I did with Cartagra: TextMeshPro (a third-party asset for Unity) to create scalable text. TextMeshPro uses what’s called a Signed Distance Field (SDF for short), a technique pioneered by an employee at Valve to generate great-looking, scalable text. It works by generating a texture which, instead of storing an image of each character, stores in each pixel the distance to the closest glyph edge. Using that information and some nifty math, you can re-create the edges of the text at a much larger scale than you normally would be able to. I guess that’s not a very good explanation, but basically it means that if you create an atlas using an SDF generator, you get really good looking text. As an added bonus, it also lets you do all kinds of nifty things with the text, such as drop shadows and outlines, directly in the shader. The author of TextMeshPro has helped immensely by working in support for Japanese-style word wrapping and fixing various bugs I encountered. If you work in Unity, you should check it out.
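If that explanation didn’t land, the core of the trick fits in a few lines. Each texel stores a distance rather than ink, and the glyph edge is re-derived per pixel at whatever scale you’re rendering. Here it is as conceptual C# (the real version lives in TextMeshPro’s shader):

// Conceptual sketch of SDF text rendering: 0.5 marks the glyph edge in the
// distance field, and smoothstep gives a crisp, antialiased boundary at any scale.
static float GlyphCoverage(float sampledDistance, float smoothing)
{
	float t = (sampledDistance - (0.5f - smoothing)) / (2f * smoothing);
	if (t < 0f) t = 0f;
	if (t > 1f) t = 1f;
	return t * t * (3f - 2f * t); // standard smoothstep
}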

This is one of the font atlases used in Higurashi.

An SDF font really shines when you have a small set of characters to display… which was kind of the opposite of the situation with Higurashi, where the font needs to contain every Japanese character used in the game. There are a lot of them, and it results in a pretty cramped atlas. The final atlas was generated at a size of 26pt with 5 pixels of spacing (spacing is important so characters don’t accidentally spill into each other), and it barely fits on a 2048×2048 texture. Still, the performance is solid, with only a single draw call to render, and even though the SDF is generated at 26pt, it still looks really good when scaled up, even at full screen.

There are some downsides to using TextMeshPro to draw text. One big issue I ran into was that I didn’t have an easy way to create an editable text field, as there’s no built-in functionality for it. In Cartagra and the upcoming Kara no Shojo 2, I used the built-in Unity text handling to take care of text input… but Higurashi’s UI is unique in that it scales down at lower resolutions. The SDF font looks fine at any scale, but a regular font is unreadably ugly when scaled down to the sizes needed. Rather reluctantly, I ended up removing the save-naming functionality from the release because of this problem (in case you wanted to know where it went!).

Here’s an example of an image atlas, this one for the save/load screen. It contains all the images used on the screen except the background. It looks messy, but it’s created automatically, so it’s not hard to work with.

Of course, text isn’t the only place texture atlases were used; they’re used across the entire game UI. Draw calls, where the graphics card must switch between textures to render different objects, are one of the most expensive graphics operations. Even though modern graphics adapters can issue thousands of draw calls and still maintain a solid frame rate, people expect a visual novel to run on a (figurative) toaster, so I try my best to make that possible.

To reduce draw calls, most UI elements of the same type are atlased together. You’d think this would be annoying to do, but it’s handled automatically by Unity, so it’s not a big deal. We manage to get by with very few draw calls: with text on screen, the UI shown, and two characters visible, I see as few as 9 draw calls. Even in the most crowded situations, such as the save/load screen with a dialog prompt visible, draw calls don’t usually go above 25. Not bad!

Modifying the UI within the Unity editor.

The different UI components themselves are just Unity prefabs. Though I’ve transitioned to Unity’s uGUI system in newer releases, Higurashi mostly uses NGUI to handle the user interface. I place and arrange UI elements on screen in the editor, assign scripts and adjust as necessary, and then save it all as a prefab. During play, when a screen is opened, it is instantiated from the prefab and draws itself with its own camera above other elements. The only exception is the primary UI, which is always present in-scene and simply has its alpha scaled to make it invisible when it’s hidden.

The real benefit of working with the UI this way is that I can adjust everything on screen visually and make changes even while the game is running. The downside is that, unlike most of the game, it’s unfortunately difficult to modify. Sorry, modders! 🙁

Audio

Unity actually doesn’t support loading audio files from an external source, only from within the game project, which ended up being a bit of a challenge. Back when I was working on Really Really, I found a really handy open-source C# project called NVorbis, which handles loading Ogg files and retrieving audio samples. The project maintainer graciously fixed some bugs I encountered with it as well. The audio samples are piped into Unity’s audio system through a custom DSP filter that plays the supplied samples. Unity’s audio filter pipeline is full of obscure bugs (and some aspects just plain don’t function at all), so it wasn’t easy to get working, but I think I have a pretty stable setup now. There are some pretty specific requirements about minimum length, sample rate, and channels though, so if you saw the note I left in the StreamingAssets folder about what kind of audio you can play back, that’s why!
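A stripped-down sketch of that setup, with error handling, looping, and the matching of channel counts and sample rates omitted (those are exactly the fussy parts):

using NVorbis;
using UnityEngine;

// Sketch of piping NVorbis output into Unity via a custom DSP filter.
public class OggStreamPlayer : MonoBehaviour
{
	private VorbisReader vorbis;

	public void Open(string path)
	{
		vorbis = new VorbisReader(path);
	}

	// Unity calls this on the audio thread whenever the mixer needs samples.
	private void OnAudioFilterRead(float[] data, int channels)
	{
		if (vorbis == null)
			return;

		int read = vorbis.ReadSamples(data, 0, data.Length);

		// Pad with silence if the file ran out mid-buffer.
		for (int i = read; i < data.Length; i++)
			data[i] = 0f;
	}
}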

The Game System

Tying everything together is the Game System object. All of these systems, the script, scene, UI, and audio systems, won’t do anything without a unifying object to make them run together. The game system manages creation and initialization of the other systems, and also contains the two primary components needed for the game to run: the state stack and the wait system.

The state stack is the way the game switches between different states of execution. The game has one current, active game state, and a stack of previous game states that are suspended. When you enter a new game state, the current one is put on top of the stack and the new state takes over; leave a state, and the previous one returns to become the active state. Each game state has a set of functions that are called when the state is first invoked, every frame while it’s ongoing, and again once a request to leave the state is made. Each state handles its own user input, and user interface elements are unique to a specific game state, so if you enter, say, the options menu, you can no longer interact with the regular game elements, and so on.
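In code, the state stack is about as simple as it sounds. A sketch, with the interface and names being mine rather than the engine’s:

using System.Collections.Generic;

// Sketch of the state stack. Each state gets hooks for entering, per-frame
// updates (including its own input handling), and leaving.
interface IGameState
{
	void OnEnter();
	void OnUpdate();
	void OnLeave();
}

class StateStack
{
	private readonly Stack<IGameState> suspended = new Stack<IGameState>();
	private IGameState current;

	public void Push(IGameState next)
	{
		if (current != null)
			suspended.Push(current); // suspend the old state, don't tear it down

		current = next;
		current.OnEnter();
	}

	public void Pop()
	{
		current.OnLeave(); // the state cleans up its resources here
		current = suspended.Count > 0 ? suspended.Pop() : null;
	}

	public void Update()
	{
		if (current != null)
			current.OnUpdate();
	}
}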

The game defaults to what is called (rather unoriginally) the normal state. In this state, the game script executes automatically until it reaches a wait (more on this a bit later), and the standard controls and shortcuts for operating the game are active. The game plays as you’d expect. From this state, the game will switch to another state depending on your actions. If you right click, for example, the game switches to the menu state, where the right-click menu object is instantiated and the normal state is put on the stack. If you then click the button to go to the save screen, the menu state leaves and the save screen state is added, which instantiates and prepares the interface for that screen. When you leave the save screen, the state cleans up its resources and the last game state on the stack returns, putting you back into gameplay. Through this, you can switch between any mode of execution.

The wait system is how the script controls execution of the novel system. At its simplest, the novel only advances when there is no existing wait. There are all kinds of different waits… waiting for input (waiting on the user to continue), waiting for text (the on-screen text is currently typing), waiting for loading (the game is loading a resource it will need), waiting for time (the script requested a delay), and a few other random ones. When a wait is created, you can also attach an action that will execute when it’s done. Some waits are timed and expire automatically; others must be cleared by external means.

So, for example, the game script creates a sprite on screen and sets it to fade in over one second. The on-complete action for the fade-in is to finalize the fade. When the wait is added, the game stops executing the script until the specified time has elapsed. If for some reason an action occurs that clears existing waits (say, you clicked the mouse), the on-finish action is executed and the sprite is set to its final position. Then perhaps the script says to display text on screen. A wait-for-text is added with the duration the text will take to display (if you were wondering, at the default speed each character starts fading in after 0.025s in English and 0.05s in Japanese, and each letter takes 0.15s to fade in), and additionally a wait-for-input is added. When you click the mouse, if a wait-for-text exists, it is cleared, causing all the text to finish displaying. Another click clears the wait-for-input, and the script advances.
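Here’s a sketch of the bookkeeping behind that example (the wait kinds and names are illustrative):

using System;
using System.Collections.Generic;
using UnityEngine;

// Sketch of the wait system: the script only advances when no waits remain.
enum WaitType { Input, Text, Load, Time }

class Wait
{
	public WaitType Type;
	public float ExpiresAt;   // infinity for waits that never time out
	public Action OnComplete; // e.g. snap a fading sprite to its final state
}

class WaitManager
{
	private readonly List<Wait> waits = new List<Wait>();

	public bool CanAdvance { get { return waits.Count == 0; } }

	public void AddWait(WaitType type, float duration, Action onComplete)
	{
		Wait wait = new Wait();
		wait.Type = type;
		// Waits for input have no duration; they must be cleared externally.
		wait.ExpiresAt = duration > 0f ? Time.time + duration : float.PositiveInfinity;
		wait.OnComplete = onComplete;
		waits.Add(wait);
	}

	// Timed waits expire on their own each frame...
	public void Update()
	{
		Remove(w => Time.time >= w.ExpiresAt);
	}

	// ...while a click clears waits of a given kind (text first, then input).
	public void Clear(WaitType type)
	{
		Remove(w => w.Type == type);
	}

	private void Remove(Predicate<Wait> match)
	{
		for (int i = waits.Count - 1; i >= 0; i--)
		{
			if (!match(waits[i]))
				continue;
			if (waits[i].OnComplete != null)
				waits[i].OnComplete(); // finishing a wait runs its action
			waits.RemoveAt(i);
		}
	}
}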

Not waiting is just as important as waiting, though. The game will execute the script as fast as possible until it hits a wait, so you can be certain that consecutive actions will execute on the same frame as long as you don’t tell it to wait until after the last action you want to run.

Interestingly, when you use text skip, it simply tells the game to clear existing waits each frame. I experimented with not creating waits at all, but that skips too fast: text doesn’t become visible at all. Auto mode creates a wait-for-time instead of a wait-for-input.

All the script flow is handled by the waits, and all game execution is handled by the state stack. With everything tied together, you get a working visual novel!

Conclusion

I know it’s been a bit of a rambling ride, but I wanted to describe the process of porting a game in as much detail as possible. A visual novel is not always as simple as people expect it to be, and there have been a lot of challenges in bringing these games to their intended audience. I hope you enjoyed this window into the technical side of bringing the game to you, and hopefully I’ll see everyone next time, probably after I finish the port of Kara no Shojo 2!


8 Comments

  1. Wow, how fascinating! Visual novels may seem simple on the surface, but there’s actually a lot to it. Thanks for describing it all to us and, of course, for developing the engine!

  2. That was very interesting. Thank you for doing that and for writing about it!

  3. I think that has to be the longest blog post I’ve seen so far on this site… :-O

  4. One of the things I’ve lamented about Visual Novels in general is the lack of thought put into the engines by the original developers. Lately in Japan E-mote has been popular, but I’ve seen it simply be a quick patch job and it ends up looking horrible when scaled, for example.

    So on this front and as a programmer myself I am glad at least MG found you as someone to look into the more technical side of things. At the very least, in Japan some companies like Windmill and Lump of Sugar have been hiring talent with knowledge in Unity.

  5. Now that the encryption on the PS3 assets was finally cracked (as of yesterday) the /jp/ thread was wondering if you could add in the possibility for us to modify the game to 16:9 to make use of all of the assets for the mod patch.

  6. It’s very nice to have all the sources available for mods but why is Steam support for mods (that is community center) not enabled?

    • There are a few reasons. The main one is that mods can’t play nicely with each other; only one can technically exist at a time in the current setup. Additionally, the vast majority of changes people are making use copyrighted assets, which wouldn’t be legal on the Workshop anyway.

  7. Any chance you guys can fix the broken links to the images on this page? This is a really informative post and I’d hate to see it become lost to time.

    (Luckily, someone already archived the page here (images and all):

    https://archive.is/9VGex

    so if you guys have lost the original images, you can get them back from there).
