A recently issued Sony patent outlines a system in which machine learning would use player inputs and viewpoints to improve the behavior and perspective of non-playable characters (NPCs) in video games. The approach seeks to refine the current method, in which NPC behavior is derived solely from recorded human play sessions: rather than restricting NPC movement and perspective to developer-supplied recordings, the system would train NPCs on data from both a recorded session and the real-time movement and viewpoint of the player-controlled character. While previous Sony patents have explored leveraging player input to enhance realism in video games, it’s worth noting that a patent filing does not guarantee eventual implementation, and many patents remain speculative.
Patent Details for Sony Machine Learning NPC Software
Referring to Figures 2 and 3 in the Sony patent, the machine learning software is designed to receive data from the playable character’s viewpoint. It duplicates this information to generate a copied display, which is then used to train the AI governing the NPC. This process complements the existing system that involves a “training set,” where data from developer human play sessions is utilized to feed the machine learning model and AI. While similar approaches already exist, they typically rely solely on the playable character’s perspective. In contrast, this software employs machine learning to refine inputs for the NPC perspective, thereby enhancing the realism of AI-controlled movements.
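The patent itself contains no source code, so the following is only a minimal Python sketch of that data flow as described: live player-viewpoint frames are duplicated into a copied display and combined with the prerecorded developer training set before being fed to whatever model governs the NPC. All names here (Frame, duplicate_display, training_batch) are illustrative assumptions, not anything from the filing.

```python
from dataclasses import dataclass

# Hypothetical illustration of the data flow described in the patent:
# live player-viewpoint frames are duplicated into an NPC-side "copied
# display" and blended with a prerecorded developer "training set".

@dataclass
class Frame:
    position: tuple          # player character's world position
    view_direction: tuple    # camera/view vector for this frame
    inputs: frozenset        # buttons pressed on this frame

def duplicate_display(player_frame: Frame) -> Frame:
    """Copy the player's viewpoint data so the NPC model can train on it."""
    return Frame(player_frame.position,
                 player_frame.view_direction,
                 player_frame.inputs)

def training_batch(developer_session: list[Frame],
                   live_player_frames: list[Frame]) -> list[Frame]:
    """Combine the prerecorded training set with live player data,
    roughly as Figures 2 and 3 describe."""
    copied = [duplicate_display(f) for f in live_player_frames]
    return developer_session + copied

if __name__ == "__main__":
    recorded = [Frame((0.0, 0.0), (1.0, 0.0), frozenset({"forward"}))]
    live = [Frame((1.5, 0.0), (1.0, 0.0), frozenset({"forward", "jump"}))]
    batch = training_batch(recorded, live)
    print(f"{len(batch)} frames available to train the NPC model")
```

The point of the sketch is the combination step: the NPC model is no longer limited to the developer session alone, which is the distinction the patent draws from existing approaches.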
As an illustration, in escort missions NPCs often mirror player movements exactly, such as jumping at the same instant. With this new software, the NPC would jump only when its own perspective indicates it has reached the position where the player character jumped, as the sketch below illustrates. This could reduce awkward NPC behavior during escort missions, such as NPCs mistiming a jump over a gap, and make NPCs appear more realistic in spectator views. Whether this machine learning model will be put into practice remains uncertain, but it offers enthusiasts a glimpse into Sony’s research and development efforts.
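Here is a minimal sketch of that jump-timing check. The function name and distance threshold are assumptions for illustration; the patent describes the behavior, not an implementation.

```python
import math

# Hypothetical sketch of the jump-timing behavior: instead of mirroring
# the player's jump input immediately, the NPC jumps only once its own
# viewpoint reports it has reached the spot where the player jumped.

JUMP_TRIGGER_RADIUS = 0.5  # assumed: how close the NPC must be to the jump point

def npc_should_jump(npc_position: tuple[float, float],
                    player_jump_position: tuple[float, float]) -> bool:
    """Return True once the NPC's perspective places it at the ledge."""
    dx = npc_position[0] - player_jump_position[0]
    dy = npc_position[1] - player_jump_position[1]
    return math.hypot(dx, dy) <= JUMP_TRIGGER_RADIUS

# The player jumped a gap at (10.0, 2.0); the NPC is still approaching.
print(npc_should_jump((8.0, 2.0), (10.0, 2.0)))   # False: too far back
print(npc_should_jump((9.7, 2.0), (10.0, 2.0)))   # True: at the ledge
```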