4.2 Large Language Model-Based AI

Our goal is to develop a Large Language Model-based MOBA AI that can understand a player's gameplay and provide corresponding suggestions. Furthermore, we aim to enable it to play the game authentically and even surpass top human players. Such an AI can help players analyze game data and uncover behavior patterns, thereby improving both the gaming experience and competitive skill. Developing a MOBA game AI is not a simple task; the following technical roadmap combines our data resources with the capabilities of Large Language Models.

First, to enable the AI to comprehend gaming tasks, we rely on stored game recordings that include player inputs. After client-side frame calculation, we obtain a time series of states (position, health, level, etc.). The raw data comprises 20 frames per second, which we initially slice into different time resolutions (e.g., 1 second, 10 seconds). We pre-generate high-quality descriptions of game states and refine them with Large Language Models, then pair these descriptions with the corresponding state files to form tokenized time series, using Transformer-style training methods to train the AI interpreter. A sketch of this slicing and tokenization step follows.
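
As an illustration, the sketch below slices a 20 fps frame stream into fixed windows and serializes each frame into discrete tokens. The frame schema (tick, x, y, health, level), the bucket sizes, and the token format are hypothetical placeholders for this sketch, not the production tokenizer.

```python
from dataclasses import dataclass
from typing import List

FPS = 20  # raw recordings are stored at 20 frames per second

@dataclass
class Frame:
    """One state sample reconstructed by client-side frame calculation."""
    tick: int     # frame index within the match
    x: float      # hypothetical position fields
    y: float
    health: int
    level: int

def slice_frames(frames: List[Frame], resolution_s: float) -> List[List[Frame]]:
    """Slice the 20 fps stream into fixed-length windows (e.g. 1 s or 10 s)."""
    window = int(resolution_s * FPS)
    return [frames[i:i + window] for i in range(0, len(frames) - window + 1, window)]

def frame_to_tokens(frame: Frame) -> List[str]:
    """Serialize one frame into discrete tokens; a real tokenizer would
    quantize continuous values into a fixed vocabulary."""
    return [
        f"<x:{round(frame.x)}>",
        f"<y:{round(frame.y)}>",
        f"<hp:{frame.health // 100}>",  # coarse health bucket
        f"<lvl:{frame.level}>",
    ]

# A (state-token sequence, description) pair is one training example for the
# interpreter, trained with standard sequence-to-sequence Transformer methods.
frames = [Frame(t, x=100.0 + t, y=200.0, health=1000 - t, level=3) for t in range(400)]
for window in slice_frames(frames, resolution_s=1.0):
    tokens = [tok for f in window for tok in frame_to_tokens(f)]
    example = {"input_tokens": tokens, "target": "<LLM-generated description>"}
```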

The second task similarly time-slices high-level player operations. We pre-generate operation strategy descriptions (e.g., jungle farming, dragon killing, tower pushing, attacking, and retreating) and refine them iteratively with Large Language Models. Our aim is to train an encoder that maps operation sequences to strategy descriptions and a decoder that maps internal strategies back to operation sequences; see the sketch below. This task can be iterated jointly with training the AI interpreter.
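
A minimal PyTorch sketch of this encoder/decoder pair is given below. The vocabulary sizes (NUM_OPS, NUM_STRATEGIES), the model width, and the fixed sequence length are all assumptions for illustration; the original describes the mapping only at a conceptual level.

```python
import torch
import torch.nn as nn

NUM_OPS = 64        # size of the operation vocabulary (hypothetical)
NUM_STRATEGIES = 5  # e.g. jungle farming, dragon killing, tower pushing, attack, retreat
D_MODEL = 128

class StrategyEncoder(nn.Module):
    """Maps a tokenized operation sequence to a strategy label."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(NUM_OPS, D_MODEL)
        layer = nn.TransformerEncoderLayer(D_MODEL, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(D_MODEL, NUM_STRATEGIES)

    def forward(self, ops: torch.Tensor) -> torch.Tensor:
        h = self.encoder(self.embed(ops))  # (batch, seq, d_model)
        return self.head(h.mean(dim=1))    # pool over time -> strategy logits

class StrategyDecoder(nn.Module):
    """Maps an internal strategy back to per-step operation logits;
    a full version would decode the sequence autoregressively."""
    def __init__(self, seq_len: int = 32):
        super().__init__()
        self.seq_len = seq_len
        self.embed = nn.Embedding(NUM_STRATEGIES, D_MODEL)
        self.proj = nn.Linear(D_MODEL, seq_len * NUM_OPS)

    def forward(self, strategy: torch.Tensor) -> torch.Tensor:
        out = self.proj(self.embed(strategy))
        return out.view(-1, self.seq_len, NUM_OPS)

# ops: a batch of operation-token sequences; strategy labels would come
# from the LLM-refined descriptions above (random stand-ins here).
ops = torch.randint(0, NUM_OPS, (8, 32))
logits = StrategyEncoder()(ops)                   # operations -> strategy
recon = StrategyDecoder()(logits.argmax(dim=-1))  # strategy -> operations
```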

An AI agent capable of playing the game is, in essence, a mapping from the time-state space to the operation space. Both spaces are represented through tokenization, as established by the training above. We treat high-quality matches as demonstrations of strong agents, use them as training samples, and iterate the training with diffusion methods, as sketched below.
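
To make the diffusion step concrete, the sketch below trains a toy denoising network that, conditioned on a state embedding, learns to recover operation embeddings from noise, following the standard DDPM-style objective. The dimensions, noise schedule, and random stand-in batch are all assumptions; real batches would come from the tokenized replays.

```python
import torch
import torch.nn as nn

D = 128        # shared embedding width for state and operation tokens (assumed)
T_STEPS = 100  # number of diffusion timesteps (hypothetical schedule)

class DenoisingPolicy(nn.Module):
    """Predicts the noise added to an operation embedding, conditioned on
    the current state embedding and the diffusion timestep."""
    def __init__(self):
        super().__init__()
        self.time_embed = nn.Embedding(T_STEPS, D)
        self.net = nn.Sequential(nn.Linear(3 * D, 256), nn.ReLU(), nn.Linear(256, D))

    def forward(self, noisy_op, state, t):
        return self.net(torch.cat([noisy_op, state, self.time_embed(t)], dim=-1))

model = DenoisingPolicy()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
betas = torch.linspace(1e-4, 0.02, T_STEPS)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

# One training step on a batch of (state, operation) embedding pairs; in
# practice these would be drawn from tokenized high-quality matches.
state = torch.randn(32, D)
op = torch.randn(32, D)
t = torch.randint(0, T_STEPS, (32,))
noise = torch.randn_like(op)
a = alphas_bar[t].unsqueeze(-1)
noisy_op = a.sqrt() * op + (1 - a).sqrt() * noise  # forward diffusion
loss = nn.functional.mse_loss(model(noisy_op, state, t), noise)
opt.zero_grad()
loss.backward()
opt.step()
```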

We have completed data collection and pre-processing, establishing the crucial data foundation. Fixed-length frame segments ensure data consistency, and high-quality matches can be obtained directly from high-scoring players. Using Large Language Models significantly reduces the manual annotation workload and eases the difficulty of designing the encoder and decoder.
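
A small sketch of the match-selection step, assuming each replay record exposes a player rating field; both the field name ("score") and the threshold are hypothetical.

```python
RATING_THRESHOLD = 2400  # hypothetical cutoff defining "high-scoring" players

def select_high_quality_matches(replays: list) -> list:
    """Keep only replays from high-scoring players; these serve as the
    high-quality training samples referenced above."""
    return [r for r in replays if r.get("score", 0) >= RATING_THRESHOLD]
```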
