VSeeFace is a free, highly configurable face and hand tracking VRM and VSFAvatar avatar puppeteering program for virtual youtubers with a focus on robust tracking and high image quality. VSeeFace offers functionality similar to Luppet, 3tene, Wakaru and similar programs. VSeeFace runs on Windows 8 and above (64 bit only). VSeeFace can send, receive and combine tracking data using the VMC protocol, which also allows iPhone perfect sync support through Waidayo like this.
Face tracking, including eye gaze, blink, eyebrow and mouth tracking, is done through a regular webcam. For the optional hand tracking, a Leap Motion device is required. You can see a comparison of the face tracking performance with other popular vtuber applications here. In this comparison, VSeeFace is still listed under its former name OpenSeeFaceDemo.
"Running four face tracking programs (OpenSeeFaceDemo, Luppet, Wakaru, Hitogata) at once with the same camera input. 😊 pic.twitter.com/ioO2pofpMx" (Emiliana, @emiliana_vt, June 23, 2020)
Please note that Live2D models are not supported. For those, please check out VTube Studio or PrprLive.
To update VSeeFace, just delete the old folder or overwrite it when unpacking the new version.
Old versions can be found in the release archive here. This website, the #vseeface-updates channel on Deat’s discord and the release archive are the only official download locations for VSeeFace.
I post news about new versions and the development process on Twitter with the
#VSeeFace hashtag. Feel free to also use this hashtag for anything VSeeFace related. Starting with 1.13.26, VSeeFace will also check for updates and display a green message in the upper left corner when a new version is available, so please make sure to update if you are still on an older version.
The reason it is currently only released in this way is to make sure that everybody who tries it out has an easy channel to give me feedback.
VSeeFace cannot record with a chroma key, but if you check Allow transparency on OBS's Game Capture and hide the UI with the ※ button in the lower right of VSeeFace, you get a clean transparent background.
You can use VSeeFace to stream or do pretty much anything you like, including non-commercial and commercial uses. Just don’t modify it (other than the translation
json files) or claim you made it.
VSeeFace is beta software. There may be bugs and new versions may change things around. It is offered without any kind of warranty, so use it at your own risk. It should generally work fine, but it may be a good idea to keep the previous version around when updating.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS “AS IS” AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
Starting with VSeeFace v1.13.36, a new Unity asset bundle and VRM based avatar format called VSFAvatar is supported by VSeeFace. This format allows various Unity functionality such as custom animations, shaders and various other components like dynamic bones, constraints and even window captures to be added to VRM models. This is done by re-importing the VRM into Unity and adding and changing various things. To learn more about it, you can watch this tutorial by @Virtual_Deat, who worked hard to bring this new feature about!
A README file with various important information is included in the SDK, but you can also read it here.
Make sure to set the Unity project to linear color space.
You can watch how the two included sample models were set up here.
There are a lot of tutorial videos out there. This section lists a few to help you get started, but it is by no means comprehensive. Make sure to look around!
This section is still a work in progress. For help with common issues, please refer to the troubleshooting section.
The most important information can be found by reading through the help screen as well as the usage notes inside the program.
You can rotate, zoom and move the camera by holding the Alt key and using the different mouse buttons. The exact controls are given on the help screen.
Once you’ve found a camera position you like and would like for it to be the initial camera position, you can set the default camera setting in the General settings to Custom. You can now move the camera into the desired position and press Save next to it to save a custom camera position. Please note that these custom camera positions do not adapt to avatar size, while the regular default positions do.
VSeeFace does not support chroma keying. Instead, capture it in OBS using a game capture and enable the
Allow transparency option on it. Once you press the tiny ※ button in the lower right corner, the UI will become hidden and the background will turn transparent in OBS. You can hide and show the ※ button using the space key.
You can set up the virtual camera function, load a background image and do a Discord (or similar) call using the virtual VSeeFace camera.
Those bars are there to let you know that you are close to the edge of your webcam’s field of view and should stop moving that way, so you don’t lose tracking due to being out of sight. If you have set the UI to be hidden using the ※ button in the lower right corner, blue bars will still appear, but they will be invisible in OBS as long as you are using a
Game Capture with
Allow transparency enabled.
Yes, unless you are using the
Toaster quality level or have enabled
Synthetic gaze which makes the eyes follow the head movement, similar to what Luppet does. You can try increasing the gaze strength and sensitivity to make it more visible.
If humanoid eye bones are assigned in Unity, VSeeFace will directly use these for gaze tracking. The gaze strength determines how far the eyes will move. To use the VRM blendshape presets for gaze tracking, make sure that no eye bones are assigned. The synthetic gaze, which moves the eyes either according to head movement or so that they look at the camera, uses the
VRMLookAtBoneApplyer or the
VRMLookAtBlendShapeApplyer, depending on what exists on the model.
Make sure the gaze offset sliders are centered. They can be used to correct the gaze for avatars that don’t have centered irises, but they can also make things look quite wrong when set up incorrectly.
Make sure your eyebrow offset slider is centered. It can be used to overall shift the eyebrow position, but if moved all the way, it leaves little room for them to move.
First, hold the alt key and right click to zoom out until you can see the Leap Motion model in the scene. Then use the sliders to adjust the model’s position to match its location relative to yourself in the real world. You can refer to this video to see how the sliders work.
Changing the position also changes the height of the Leap Motion in VSeeFace, so just pull the Leap Motion position’s height slider way down. Zooming out may also help.
To fix this error, please install the V4 (Orion) SDK. It says it’s used for VR, but it is also used by desktop applications.
All configurable hotkeys also work while it is in the background or minimized, so the expression hotkeys, the audio lipsync toggle hotkey and the configurable position reset hotkey all work from any other program as well. On some systems it might be necessary to run VSeeFace as admin to get this to work properly for some reason.
In at least one case, the following setting has apparently fixed this: Windows => Graphics Settings => Change default graphics settings => Disable “Hardware-accelerated GPU scheduling”. In another case, setting VSeeFace to realtime priority seems to have helped.
Try switching the camera settings from
Camera defaults to something else. The camera might be using an unsupported video format by default.
Follow the official guide. The important thing to note is that it is a two step process. First, you export a base VRM file, which you then import back into Unity to configure things like blend shape clips. After that, you export the final VRM. If you look around, there are probably other resources out there too.
Yes, you can do so using UniVRM and Unity. You can find a tutorial here. Once the additional VRM blend shape clips are added to the model, you can assign a hotkey in the
Expression settings to trigger it. The expression detection functionality is limited to the predefined expressions, but you can also modify those in Unity and, for example, use the
Joy expression slot for something else.
This is most likely caused by not properly normalizing the model during the first VRM conversion. To properly normalize the avatar during the first VRM export, make sure that
Pose Freeze and
Force T Pose is ticked on the
ExportSettings tab of the VRM export dialog. I also recommend making sure that no jaw bone is set in Unity’s humanoid avatar configuration before the first export, since often a hair bone gets assigned by Unity as a jaw bone by mistake. If a jaw bone is set in the head section, click on it and unset it using the backspace key on your keyboard. If your model does have a jaw bone that you want to use, make sure it is correctly assigned instead.
Note that re-exporting a VRM will not work for properly normalizing the model. Instead, the original model (usually FBX) has to be exported with the correct options set.
If you have the fixed hips option enabled in the advanced option, try turning it off. If this helps, you can try the option to disable vertical head movement for a similar effect. If it doesn’t help, try turning up the smoothing, make sure that your room is brightly lit and try different camera settings.
Make sure to set “Blendshape Normals” to “None” on the FBX when you import it into Unity and before you export your VRM. That should prevent this issue.
You can add two custom VRM blend shape clips called “Brows up” and “Brows down” and they will be used for the eyebrow tracking. You can also add them on VRoid and Cecil Henshin models to customize how the eyebrow tracking looks. Also refer to the special blendshapes section.
I would recommend running VSeeFace on the PC that does the capturing, so it can be captured with proper transparency. The actual face tracking could be offloaded using the network tracking functionality to reduce CPU usage. If this is really not an option, please refer to the release notes of v1.13.34o. The
settings.ini can be found as described here.
The screenshots are saved to a folder called
VSeeFace inside your
Pictures folder. You can make a screenshot by pressing
S or a delayed screenshot by pressing
VRM conversion is a two step process. After the first export, you have to put the VRM file back into your Unity project to actually set up the VRM blend shape clips and other things. You can follow the guide on the VRM website, which is very detailed with many screenshots.
Because I don’t want to pay a high yearly fee for a code signing certificate.
N versions of Windows are missing some multimedia features. First make sure your Windows is updated and then install the media feature pack.
Right click it, select
Extract All... and press next. You should have a new folder called VSeeFace. Inside there should be a file called
VSeeFace with a blue icon, like the logo on this site. Double click on that to run VSeeFace. There’s a video here.
If Windows 10 won’t run the file and complains that the file may be a threat because it is not signed, you can try the following: right click it -> Properties -> Unblock -> Apply, or when running the exe file, select More Info -> Run Anyway.
No. Although, if you are very experienced with Linux and wine as well, you can try following these instructions for running it on Linux.
It’s reportedly possible to run it using wine.
No, VSeeFace only supports 3D models in VRM format. While there are free tiers for Live2D integration licenses, adding Live2D support to VSeeFace would only make sense if people could load their own models. In that case, it would be classified as an “Expandable Application”, which needs a different type of license, for which there is no free tier. As VSeeFace is a free program, integrating an SDK that requires the payment of licensing fees is not an option.
No, VSeeFace cannot use the Tobii eye tracker SDK due to its licensing terms.
You can enable the virtual camera in VSeeFace, set a single colored background image and add the VSeeFace camera as a source, then go to the color tab and enable a chroma key with the color corresponding to the background image. Note that this may not give as clean results as capturing in OBS with proper alpha transparency.
Please note that the camera needs to be reenabled every time you start VSeeFace unless the option to keep it enabled is enabled. This option can be found in the advanced settings section.
The virtual camera can be used to use VSeeFace for teleconferences, Discord calls and similar. It can also be used in situations where using a game capture is not possible or very slow, due to specific laptop hardware setups.
To use the virtual camera, you have to enable it in the
General settings. For performance reasons, it is disabled again after closing the program. Starting with version 1.13.27, the virtual camera will always provide a clean (no UI) image, even while the UI of VSeeFace is not hidden using the small ※ button in the lower right corner.
When using it for the first time, you first have to install the camera driver by clicking the installation button in the virtual camera section of the
General settings. This should open a UAC prompt asking for permission to make changes to your computer, which is required to set up the virtual camera. If no such prompt appears and the installation fails, starting VSeeFace with administrator permissions may fix this, but it is not generally recommended. After a successful installation, the button will change to an uninstall button that allows you to remove the virtual camera from your system.
After installation, it should appear as a regular webcam. The virtual camera only supports the resolution 1280x720. Changing the window size will most likely lead to undesirable results, so it is recommended that the
Allow window resizing option be disabled while using the virtual camera.
The virtual camera supports loading background images, which can be useful for vtuber collabs over discord calls, by setting a unicolored background.
Should you encounter strange issues with the virtual camera and have previously used it with a version of VSeeFace earlier than 1.13.22, please try uninstalling it using the
UninstallAll.bat, which can be found in
VSeeFace_Data\StreamingAssets\UnityCapture. If the camera outputs a strange green/yellow pattern, please do this as well.
If supported by the capture program, the virtual camera can be used to output video with alpha transparency. To make use of this, a fully transparent PNG needs to be loaded as the background image. Starting with version 1.13.25, such an image can be found in
VSeeFace_Data\StreamingAssets. Partially transparent backgrounds are supported as well. Please note that using (partially) transparent background images with a capture program that does not support RGBA webcams can lead to color errors. OBS and Streamlabs OBS support ARGB video camera capture, but require some additional setup. Apparently, the Twitch video capturing app supports it by default.
To set up OBS or Streamlabs OBS to capture video from the virtual camera with transparency, please follow these settings.
As the virtual camera keeps running even while the UI is shown, using it instead of a game capture can be useful if you often make changes to settings during a stream.
It is possible to perform the face tracking on a separate PC. This can, for example, help reduce CPU load. This process is a bit advanced and requires some general knowledge about the use of commandline programs and batch files. To do this, copy either the whole VSeeFace folder or the
VSeeFace_Data\StreamingAssets\Binary\ folder to the second PC, which should have the camera attached. Inside this folder is a file called
run.bat. Running this file will open a console window that will first ask for some information to set up the camera and then run the tracker process that is usually run in the background of VSeeFace. If you entered the correct information, it will show an image of the camera feed with overlaid tracking points, so do not run it while streaming your desktop. This can also be useful to figure out issues with the camera or tracking in general. The tracker can be stopped with the q key while the image display window is active.
In the following, the PC running VSeeFace will be called PC A, and the PC running the face tracker will be called PC B.
To use it for network tracking, edit the
run.bat file or create a new batch file with the following content:
@echo off
facetracker -l 1
echo Make sure that nothing is accessing your camera before you proceed.
set /p cameraNum=Select your camera from the list above and enter the corresponding number: 
facetracker -a %cameraNum%
set /p dcaps=Select your camera mode or -1 for default settings: 
set /p fps=Select the FPS: 
set /p ip=Enter the LAN IP of the PC running VSeeFace: 
facetracker -c %cameraNum% -F %fps% -D %dcaps% -v 3 -P 1 -i %ip% --discard-after 0 --scan-every 0 --no-3d-adapt 1 --max-feature-updates 900
pause
If you would like to disable the webcam image display, you can change -v 3 to -v 0.
When starting this modified file, in addition to the camera information, you will also have to enter the local network IP address of PC A. You can start and stop the tracker process on PC B and VSeeFace on PC A independently. When no tracker process is running, the avatar in VSeeFace will simply not move.
On the VSeeFace side, select
[Network tracking] in the camera dropdown menu of the starting screen. Also, enter this PC’s (PC A) local network IP address in the
Listen IP field. Do not enter the IP address of PC B or it will not work. Press the start button. PC A should now be able to receive tracking data from PC B, while the tracker is running on PC B. You can find PC A’s local network IP address by enabling the VMC protocol receiver in the
General settings and clicking on
Show LAN IP.
If you are sure that the camera number will not change and know a bit about batch files, you can also modify the batch file to remove the interactive input and just hard code the values.
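For example, a stripped down, non-interactive version of the batch file could look like the sketch below. The camera number, frame rate and IP address are placeholder values that you would replace with your own, and -D -1 simply selects the camera's default mode as described above:
@echo off
facetracker -c 0 -F 30 -D -1 -v 3 -P 1 -i 192.168.1.10 --discard-after 0 --scan-every 0 --no-3d-adapt 1 --max-feature-updates 900
pause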
If things don’t work as expected, check the following things:
run.bat should open a window with a black background and grey text. Make sure you entered the necessary information and pressed enter.
At the beginning, lines with Took 20ms should appear. While a face is in the view of the camera, lines with Confidence should appear too. A second window should show the camera view with red and yellow tracking points overlaid on the face. If this is not the case, something is wrong on this side of the process.
VSeeFace has special support for certain custom VRM blend shape clips:
Fun, Angry, Sorrow and Surprised are supported by the simple and experimental expression detection features.
Brows up and Brows down will be used for eyebrow tracking if present on a model.
The standard A, I, U, E, O mouth shapes are mapped to the corresponding blendshapes by VSeeFace itself, so setting up custom VRM blend shape clips for them would be unnecessary effort. In this case it is better to have only the standard A, I, U, E, O VRM blend shape clips on the model.
You can set up VSeeFace to recognize your facial expressions and automatically trigger VRM blendshape clips in response. There are two different modes that can be selected in the General settings.
This mode is easy to use, but it is limited to the Fun, Angry and Surprised expressions. Simply enable it and it should work. There are two sliders at the bottom of the General settings that can be used to adjust how it works.
To trigger the
Fun expression, smile, moving the corners of your mouth upwards. To trigger the
Angry expression, do not smile and move your eyebrows down. To trigger the
Surprised expression, move your eyebrows up.
This mode supports the Fun, Angry, Sorrow and Surprised VRM expressions. To use it, you first have to teach the program how your face will look for each expression, which can be tricky and take a bit of time. What kind of face you make for each of them is completely up to you, but it’s usually a good idea to enable the tracking point display in the
General settings, so you can see how well the tracking can recognize the face you are making. The following video will explain the process:
When the Calibrate button is pressed, most of the recorded data is used to train a detection system. The rest of the data will be used to verify the accuracy. This will result in a number between 0 (everything was misdetected) and 1 (everything was detected correctly) and is displayed above the calibration button. A good rule of thumb is to aim for a value between 0.95 and 0.98. While this might be unexpected, a value of 1 or very close to 1 is not actually a good thing and usually indicates that you need to record more data. A value significantly below 0.95 indicates that, most likely, some mixup occurred during recording (e.g. your sorrow expression was recorded for your surprised expression). If this happens, either reload your last saved calibration or restart from the beginning.
It is also possible to set up only a few of the possible expressions. This usually improves detection accuracy. However, make sure to always set up the
Neutral expression. This expression should contain any kind of expression that should not be detected as one of the other expressions. To remove an already set up expression, press the corresponding Clear button and then recalibrate.
Having an expression detection setup loaded can increase the startup time of VSeeFace even if expression detection is disabled or set to simple mode. To avoid this, press the Clear calibration button, which will clear out all calibration data and prevent it from being loaded at startup. You can always load your detection setup again using the Load calibration button.
VSeeFace supports both sending and receiving motion data (humanoid bone rotations, root offset, blendshape values) using the VMC protocol introduced by Virtual Motion Capture. If both sending and receiving are enabled, sending will be done after received data has been applied. In this case, make sure that VSeeFace is not sending data to itself, i.e. the ports for sending and receiving are different, otherwise very strange things may happen.
When receiving motion data, VSeeFace can additionally perform its own tracking and apply it.
Track face features will apply blendshapes, eye bone and jaw bone rotations according to VSeeFace’s tracking. If only
Track fingers and
Track hands to shoulders are enabled, the Leap Motion tracking will be applied, but camera tracking will remain disabled. If any of the other options are enabled, camera based tracking will be enabled and the selected parts of it will be applied to the avatar.
Please note that received blendshape data will not be used for expression detection and that, if received blendshapes are applied to a model, triggering expressions via hotkeys will not work.
You can find a list of applications with support for the VMC protocol here.
Using the prepared Unity project and scene, pose data will be sent over VMC protocol while the scene is being played. If an animator is added to the model in the scene, the animation will be transmitted, otherwise it can be posed manually as well. For best results, it is recommended to use the same models in both VSeeFace and the Unity scene.
Certain iPhone apps like Waidayo can send perfect sync blendshape information over the VMC protocol, which VSeeFace can receive, allowing you to use iPhone based face tracking. This requires an especially prepared avatar containing the necessary blendshapes. A list of these blendshapes can be found here. You can find an example avatar containing the necessary blendshapes here. An easy, but not free, way to apply these blendshapes to VRoid avatars is to use HANA Tool. It is also possible to use VSeeFace with iFacialMocap through iFacialMocap2VMC.
To combine iPhone tracking with Leap Motion tracking, enable the
Track fingers and
Track hands to shoulders options in the VMC reception settings in VSeeFace. Enabling all other options except Track face features as well will apply the usual head tracking and body movements, which may allow more freedom of movement than just the iPhone tracking on its own.
On the iPhone app’s side, set the Send Motion IP Address to your PC’s LAN IP address. You can find it by clicking on Show LAN IP at the beginning of the VMC protocol receiver settings in VSeeFace.
If VSeeFace’s tracking should be disabled to reduce CPU usage, only enable “Track fingers” and “Track hands to shoulders” on the VMC protocol receiver. This should lead to VSeeFace’s tracking being disabled while leaving the Leap Motion operable. If the tracking remains on, this may be caused by expression detection being enabled. In this case, additionally set the expression detection setting to none.
A full Japanese guide can be found here. The following gives a short English language summary. To use HANA Tool to add perfect sync blendshapes to a VRoid model, you need to install Unity, create a new project and add the UniVRM package and then the VRM version of the HANA Tool package to your project. You can do this by dragging in the
.unitypackage files into the file section of the Unity project. Next, make sure that your VRoid VRM is exported from VRoid v0.12 (or whatever is supported by your version of HANA_Tool) without optimizing or decimating the mesh. Create a folder for your model in the
Assets folder of your Unity project and copy in the VRM file. It should now get imported.
From the HANA_Tool menu at the top, select Reader. A new window should appear. Drag the Face object into the SkinnedMeshRenderer slot at the top of the new window. Select the VRoid version and type of your model. Make sure to select Add at the bottom, then click Read BlendShapes. (Screenshot)
From the HANA_Tool menu at the top, select ClipBuilder. A new window should appear. Drag the model from the hierarchy into the slot at the top and run it. For versions older than v2.9.5b, select AddBlendShapeClip instead. A new window should appear. Drag the model from the hierarchy into the VRMBlendShapeProxy slot at the top of the new window. Again, drag the Face object into the SkinnedMeshRenderer slot underneath. Select your model type, not Extra, and press the button at the bottom. (Screenshot)
Switch to the Scene tab and select your model in the hierarchy. Scroll down in the inspector until you see a list of blend shapes. You should be able to move the sliders and see the face of your model change. Below the regular VRM and VRoid blendshapes, there should now be a bit more than 50 additional blendshapes for perfect sync use, such as one to puff your cheeks. (Screenshot)
Finally, export the model again using Export humanoid. All the necessary details should already be filled in, so you can press export to save your new VRM file. (Screenshot)
It is possible to stream Perception Neuron motion capture data into VSeeFace by using the VMC protocol. To do so, load this project into Unity 2019.4.16f1 and load the included scene in the
Scenes folder. Create a new folder for your VRM avatar inside the
Avatars folder and put in the VRM file. Unity should import it automatically. You can then delete the included Vita model from the scene and add your own avatar by dragging it into the
Hierarchy section on the left.
You can now start the Neuron software and set it up for transmitting BVH data on port 7001. Once this is done, press play in Unity to play the scene. If no red text appears, the avatar should have been set up correctly and should be receiving tracking data from the Neuron software, while also sending the tracking data over VMC protocol.
Next, you can start VSeeFace and set up the VMC receiver according to the port listed in the message displayed in the game view of the running Unity scene. Once enabled, it should start applying the motion tracking data from the Neuron to the avatar in VSeeFace.
The provided project includes NeuronAnimator by Keijiro Takahashi and uses it to receive the tracking data from the Perception Neuron software and apply it to the avatar.
ThreeDPoseTracker allows webcam based full body tracking. While the ThreeDPoseTracker application can be used freely for non-commercial and commercial uses, the source code is for non-commercial use only.
It allows transmitting its pose data using the VMC protocol, so by enabling VMC receiving in VSeeFace, you can use its webcam based full body tracking to animate your avatar. From what I saw, it is set up in such a way that the avatar will face away from the camera in VSeeFace, so you will most likely have to turn the lights and camera around. By enabling the
Track face features option, you can apply VSeeFace’s face tracking to the avatar.
If you are working on an avatar, it can be useful to get an accurate idea of how it will look in VSeeFace before exporting the VRM. You can load this example project into Unity 2019.4.16f1 and load the included preview scene to preview your model with VSeeFace like lighting settings. This project also allows posing an avatar and sending the pose to VSeeFace using the VMC protocol starting with VSeeFace v1.13.34b.
After loading the project in Unity, load the provided scene inside the Scenes folder. If you press play, it should show some instructions on how to use it.
If you prefer settings things up yourself, the following settings in Unity should allow you to get an accurate idea of how the avatar will look with default settings in VSeeFace:
Edit -> Project Settings... -> Player -> Other Settings -> Color Space: Linear
Edit -> Project Settings... -> Quality: select Ultra and set the anti-aliasing to 8x
If you enabled shadows in the VSeeFace light settings, set the shadow type on the directional light to soft.
To see the model with better light and shadow quality, use the
Game view. You can align the camera with the current scene view by pressing
Ctrl+Shift+F or using
Game Object -> Align with view from the menu.
It is possible to translate VSeeFace into different languages and I am happy to add contributed translations! To add a new language, first make a new entry in
VSeeFace_Data\StreamingAssets\Strings\Languages.json with a new language code and the name of the language in that language. The language code should usually be given in two lowercase letters, but can be longer in special cases. For a partial reference of language codes, you can refer to this list. Afterwards, make a copy of
VSeeFace_Data\StreamingAssets\Strings\en.json and rename it to match the language code of the new language. Now you can edit this new file and translate the
"text" parts of each entry into your language. The
"comment" might help you find where the text is used, so you can more easily understand the context, but it otherwise doesn’t matter.
New languages should automatically appear in the language selection menu in VSeeFace, so you can check how your translation looks inside the program. Note that a JSON syntax error might lead to your whole file not loading correctly. In this case, you may be able to find the position of the error, by looking into the
Player.log, which can be found by using the button all the way at the bottom of the general settings.
Generally, your translation has to be enclosed by double quotes "like this". If double quotes occur in your text, put a \ in front, for example "like \"this\"". Line breaks can be written as \n.
Some people have gotten VSeeFace to run on Linux through wine and it might be possible on Mac as well, but nobody has tried it, to my knowledge. However, reading webcams is not possible through wine versions before 6. Starting with wine 6, you can try just using it normally.
For previous versions or if webcam reading does not work properly, as a workaround, you can set the camera in VSeeFace to
[Network tracking] and run the
facetracker.py script from OpenSeeFace manually. To do this, you will need a Python 3.7 or newer installation. To set up everything for the
facetracker.py, you can try something like this on Debian based distributions:
sudo apt-get install python3 python3-pip python3-virtualenv git
git clone https://github.com/emilianavt/OpenSeeFace
cd OpenSeeFace
virtualenv -p python3 env
source env/bin/activate
pip3 install onnxruntime opencv-python pillow numpy
To run the tracker, first enter the OpenSeeFace directory and activate the virtual environment for the current session:
cd OpenSeeFace
source env/bin/activate
Then you can run the tracker:
python facetracker.py -c 0 -W 1280 -H 720 --discard-after 0 --scan-every 0 --no-3d-adapt 1 --max-feature-updates 900
Running this command will send the tracking data to a UDP port on localhost, on which VSeeFace will listen to receive the tracking data. The -c argument specifies which camera should be used, with the first camera being 0, while -W and -H let you specify the resolution. To see the webcam image with tracking points overlaid on your face, you can add the arguments
-v 3 -P 1 somewhere.
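For example, the full command from above with the visualization arguments added would look like this:
python facetracker.py -c 0 -W 1280 -H 720 -v 3 -P 1 --discard-after 0 --scan-every 0 --no-3d-adapt 1 --max-feature-updates 900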
Notes on running wine: First make sure you have the Arial font installed. You can put
Arial.ttf in your wine prefix’s
C:\Windows\Fonts folder and it should work. Secondly, make sure you have the 64bit version of wine installed. It often comes in a package called
wine64. Also make sure that you are using a 64bit wine prefix. After installing
wine64, you can set one up using
WINEARCH=win64 WINEPREFIX=~/.wine64 wine whatever, then unzip VSeeFace in
~/.wine64/drive_c/VSeeFace and run it with
WINEARCH=win64 WINEPREFIX=~/.wine64 wine VSeeFace.exe.
Starting with VSeeFace v1.13.33f, while running under wine
--background-color '#00FF00' can be used to set a window background color. To disable wine mode and make things work like on Windows,
--disable-wine-mode can be used.
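Putting the above together, starting VSeeFace from such a prefix with a green window background could look like this, with the prefix path and color being just the example values used above:
WINEARCH=win64 WINEPREFIX=~/.wine64 wine VSeeFace.exe --background-color '#00FF00'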
This section lists common issues and possible solutions for them.
If the VSeeFace window remains black when starting and you have an AMD graphics card, please try disabling
Radeon Image Sharpening either globally or for VSeeFace. It reportedly can cause this type of issue.
If an error appears after pressing the
Start button, please confirm that the VSeeFace folder is correctly unpacked. Previous causes have included a missing VSeeFace_Data\StreamingAssets\Binary\facetracker.exe, which is necessary for the correct operation of VSeeFace. Please confirm that this file exists and, if not, check whether it has been removed by anti virus software.
If no window with a graphical user interface appears, please confirm that you have downloaded VSeeFace and not OpenSeeFace, which is just a backend library.
If you get an error message that the tracker process has disappeared, first try to follow the suggestions given in the error. If none of them help, press the
Open logs button. If an error like the following:
File "cv2__init__.py", line 3, in <module> ImportError: DLL load failed: %1 is not a valid Win32 application.
appears near the end of the
error.txt that should have opened, you probably have an N edition of Windows. These Windows N editions, which are mostly distributed in Europe, are missing some necessary multimedia libraries. Follow these steps to install them.
If tracking doesn’t work, you can actually test what the camera sees by running the
run.bat in the
VSeeFace_Data\StreamingAssets\Binary folder. Before running it, make sure that no other program, including VSeeFace, is using the camera. After starting it, you will first see a list of cameras, each with a number in front of it. Enter the number of the camera you would like to check and press enter. Next, it will ask you to select your camera settings as well as a frame rate. You can enter -1 to use the camera defaults and 24 as the frame rate. Press enter after entering each value. After this, a second window should open, showing the image captured by your camera. If your face is visible on the image, you should see red and yellow tracking dots marked on your face. You can use this to make sure your camera is working as expected, your room has enough light, there is no strong light from the background messing up the image and so on. If the tracking points accurately track your face, the tracking should work in VSeeFace as well. To close the window, either press
q in the window showing the camera image or press Ctrl+C in the console window.
If you would like to see the camera image while your avatar is being animated, you can start VSeeFace while
run.bat is running and select
[Network tracking] in the camera option. It should receive the tracking data from the active run.bat process.
If an error message about the tracker process appears, it may be necessary to restart the program and, on the first screen of the program, enter a different camera resolution and/or frame rate that is known to be supported by the camera. To figure out a good combination, you can try adding your webcam as a video source in OBS and play with the parameters (resolution and frame rate) to find something that works.
Should the tracking still not work, one possible workaround is to capture the actual webcam using OBS and then re-export it as a camera using OBS-VirtualCam.
If tracking randomly stops and you are using Streamlabs OBS, you could see if it works properly with regular OBS. Another issue could be that Windows is putting the webcam’s USB port to sleep. You can disable this behaviour as follows:
Search for Device Manager and open it
Look for the Universal Serial Bus Controllers section
Double click each USB Root Hub entry and select the Power Management tab
Uncheck Allow the computer to turn off this device to save power and click OK
Alternatively or in addition, you can try the following approach:
Search for Power Options and open them
Click Change plan settings for your currently selected plan
Click Change advanced power settings
Click the + in front of USB settings
Click the + in front of USB selective suspend setting
Set it to Disabled and click OK
Please note that this is not a guaranteed fix by far, but it might help. If you are using a laptop where battery life is important, I recommend only following the second set of steps and setting them up for a power plan that is only active while the laptop is charging.
If, after installing it from the
General settings, the virtual camera is still not listed as a webcam under the name
VSeeFaceCamera in other programs or if it displays an odd green and yellow pattern while VSeeFace is not running, run the
UninstallAll.bat inside the folder
VSeeFace_Data\StreamingAssets\UnityCapture as administrator. Afterwards, run the
Install.bat inside the same folder as administrator. After installing the virtual camera in this way, it may be necessary to restart other programs like Discord before they recognize the virtual camera.
If the virtual camera is listed, but only shows a black picture, make sure that VSeeFace is running and that the virtual camera is enabled in the
General settings. It automatically disables itself when closing VSeeFace to reduce its performance impact, so it has to be manually re-enabled the next time it is used.
As a quick fix, disable eye/mouth tracking in the expression settings in VSeeFace. For a better fix of the mouth issue, edit your expression in VRoid Studio to not open the mouth quite as far. You can also edit your model in Unity.
VRM models need their blendshapes to be registered as VRM blend shape clips on the VRM Blend Shape Proxy.
There are sometimes issues with blend shapes not being exported correctly by UniVRM. Reimport your VRM into Unity and check that your blendshapes are there. Make sure your scene is not playing while you add the blend shape clips. Also, make sure to press Ctrl+S to save each time you add a blend shape clip to the blend shape avatar.
This is usually caused by the model not being in the correct pose when being first exported to VRM. Please try posing it correctly and exporting it from the original model file again. Sometimes using the T-pose option in UniVRM is enough to fix it. Note that fixing the pose on a VRM file and reexporting that will only lead to further issues; the pose needs to be corrected on the original model. The T pose needs to follow these specifications.
Make sure to use a recent version of UniVRM (0.66). With VSFAvatar, the shader version from your project is included in the model file. Older versions of MToon had some issues with transparency, which are fixed in recent versions.
Using the same blendshapes in multiple blend shape clips or animations can cause issues. While in theory, reusing it in multiple blend shape clips should be fine, a blendshape that is used in both an animation and a blend shape clip will not work in the animation, because it will be overridden by the blend shape clip after being applied by the animation.
First, make sure you have your microphone selected on the starting screen. You can also change it in the
General settings. Also make sure that the
Mouth size reduction slider in the
General settings is not turned up.
If you change your audio output device in Windows, the lipsync function may stop working. If this happens, it should be possible to get it working again by changing the selected microphone in the
General settings or toggling the lipsync option off and on.
Lipsync and mouth animation relies on the model having VRM blendshape clips for the A, I, U, E, O mouth shapes. Jaw bones are not supported and known to cause trouble during VRM export, so it is recommended to unassign them from Unity’s humanoid avatar configuration if present.
If a stereo audio device is used for recording, please make sure that the voice data is on the left channel. If the voice is only on the right channel, it will not be detected. In this case, software like Equalizer APO or Voicemeeter can be used to either copy the right channel to the left channel or to provide a mono device that can be used as a mic in VSeeFace. In my experience, Equalizer APO works with less delay and is more stable, but it is harder to set up.
If no microphones are displayed in the list, please check the
Player.log in the log folder. Look for
FMOD errors. They might list some information on how to fix the issue. This thread on the Unity forums might contain helpful information.
In one case, having a microphone with a 192kHz sample rate installed on the system could make lip sync fail, even when using a different microphone. In this case setting it to 48kHz allowed lip sync to work.
This is usually caused by laptops where OBS runs on the integrated graphics chip, while VSeeFace runs on a separate discrete one. Enabling the
SLI/Crossfire Capture Mode option may enable it to work, but is usually slow. Further information can be found here.
In one case, Streamlabs OBS could only capture VSeeFace when both Streamlabs OBS and VSeeFace were running with admin privileges, which is very odd and should not usually happen, but if you can’t get the game capture to work, you could give it a try.
Another workaround is to use the virtual camera with a fully transparent background image and an ARGB video capture source, as described above.
The VSeeFace settings are not stored within the VSeeFace folder, so you can easily delete it or overwrite it when a new version comes around. If you wish to access the settings file or any of the log files produced by VSeeFace, starting with version 1.13.32g, you can click the
Show log and settings folder button at the bottom of the
General settings. Otherwise, you can find them as follows:
The settings file is called
settings.ini. If you performed a factory reset, the settings before the last factory reset can be found in a file called
settings.factoryreset. There are also some other files in this directory:
avatarList.ini: Starting with VSeeFace 1.13.25, this file contains the list of VRM files listed in the avatar switcher.
error.txt: This contains error output from the face tracking process.
output.txt: This contains additional output from the face tracking process.
Player.log: This contains the Unity player log of VSeeFace.
Player-prev.log: This contains the Unity player log of VSeeFace from the previous run.
This section contains some suggestions on how you can improve the performance of VSeeFace.
If VSeeFace becomes laggy while the window is in the background, you can try enabling the increased priority option from the
General settings, but this can impact the responsiveness of other programs running at the same time.
CPU usage is mainly caused by the separate face tracking process
facetracker.exe that runs alongside VSeeFace.
The first thing to try for performance tuning should be the
Recommend Settings button on the starting screen, which will run a system benchmark to adjust tracking quality and webcam frame rate automatically to a level that balances CPU usage with quality. This usually provides a reasonable starting point that you can adjust further to your needs.
One way to slightly reduce the face tracking process’s CPU usage is to turn on the synthetic gaze option in the
General settings which will cause the tracking process to skip running the gaze tracking model starting with version 1.13.31.
There are two other ways to reduce the amount of CPU used by the tracker. The first and most recommended way is to reduce the webcam frame rate on the starting screen of VSeeFace. Tracking at a frame rate of 15 should still give acceptable results. VSeeFace interpolates between tracking frames, so even low frame rates like 15 or 10 frames per second might look acceptable. The webcam resolution has almost no impact on CPU usage.
The tracking rate is the TR value given in the lower right corner. Please note that the tracking rate may already be lower than the webcam framerate entered on the starting screen. This can be either caused by the webcam slowing down due to insufficient lighting or hardware limitations, or because the CPU cannot keep up with the face tracking. Lowering the webcam frame rate on the starting screen will only lower CPU usage if it is set below the current tracking rate.
The second way is to use a lower quality tracking model. The tracking models can also be selected on the starting screen of VSeeFace. Please note you might not see a change in CPU usage, even if you reduce the tracking quality, if the tracking still runs slower than the webcam’s frame rate. For this reason, it is recommended to first reduce the frame rate until you can observe a reduction in CPU usage. At that point, you can reduce the tracking quality to further reduce CPU usage.
Here is a list of the different models:
High quality: The default model with the best tracking and highest CPU utilization.
Medium quality: Slightly faster and slightly worse tracking quality.
Barely okay quality: Noticeably faster than the first two models, but also noticeably worse tracking. The worse tracking mainly results in worse eye blink and eyebrow tracking, as well as highly reduced expression detection performance. I recommend using auto blinking with this and the lower quality models.
Low quality: Slightly faster and noticeably worse tracking quality.
Toaster: This model is specifically intended for old PCs and is much faster than all the others, but it also offers noticeably lower tracking quality. Eye blink and gaze tracking as well as expression detection are disabled when using this model.
Certain models with a high number of meshes in them can cause significant slowdown. Starting with 1.13.25c, there is an option in the
Advanced section of the
General settings called
Disable updates. By turning on this option, this slowdown can be mostly prevented. However, while this option is enabled, parts of the avatar may disappear when looked at from certain angles. Only enable it when necessary.
In some cases it has been found that enabling this option and disabling it again mostly eliminates the slowdown as well, so give that a try if you encounter this issue. This should prevent any issues with disappearing avatar parts. However, in this case, enabling and disabling the checkbox has to be done each time after loading the model.
GPU usage is mainly dictated by frame rate and anti-aliasing. These options can be found in the General settings.
If you find GPU usage is too high, first ensure that you do not have anti-aliasing set to
Really nice, because it can cause very heavy CPU load. Next, make sure that all effects in the effect settings are disabled. If it is still too high, make sure to disable the virtual camera and improved anti-aliasing. Finally, you can try reducing the regular anti-aliasing setting or reducing the framerate cap from 60 to something lower like 30 or 24.
Generally, rendering a single character should not be very hard on the GPU, but model optimization may still make a difference. You can use this cube model to test how much of your GPU utilization is related to the model. A model exported straight from VRoid with the hair meshes combined will probably still have a separate material for each strand of hair. Combined with the multiple passes of the MToon shader, this can easily lead to a few hundred draw calls, which are somewhat expensive. Merging materials and atlassing textures in Blender, then converting the model back to VRM in Unity can easily reduce the number of draw calls from a few hundred to around ten.
Some people with Nvidia GPUs who reported strange spikes in GPU load found that the issue went away after setting
Prefer max performance in the Nvidia power management settings and setting
Texture Filtering - Quality to
High performance in the Nvidia settings.
A surprising number of people have asked if it’s possible to support the development of VSeeFace, so I figured I’d add this section.
I don’t really accept monetary donations, but getting fanart (you can find a reference here) makes me really, really happy, and getting vtuber gift subs on Twitch is nice too, because it both helps the community and gets me some cute emotes to use as well.
You really don’t have to at all, but if you really, really insist and happen to have Monero (XMR), you can send something to: 8AWmb7CTB6sMhvW4FVq6zh1yo7LeJdtGmR7tyofkcHYhPstQGaKEDpv1W2u1wokFGr7Q9RtbWXBmJZh7gAy6ouDDVqDev2t