For more implementation details of the network, see utils.Network.

Additionally, a threshold function is applied to each frame such that the walls and the player belong to the foreground and everything else belongs to the background (a sketch of this idea is given at the end of this document). See superhexagon.SuperHexagonInterface._preprocess_frame for more implementation details.

The used hyperparameters can be found at the bottom of trainer.py below `if __name__ == '__main__':`.

All six Rainbow extensions have been evaluated:

- Double Q-Learning and Dueling Networks did not improve the performance.
- n-step significantly decreased the performance.
- Prioritized experience replay performs better at first; however, after roughly 300,000 training steps the agent trained without prioritized experience replay performs better.
- The distributional approach significantly increases the performance of the agent. Distributional RL with quantile regression gives similar results (see the loss sketch at the end of this document).
- Noisy networks facilitate the exploration process; however, the noise is turned off after 500,000 training iterations.

In order to efficiently train the agent, a C++ library was written. This library serves two functions: firstly, it efficiently retrieves the frames and sends them to the Python process; secondly, it intercepts the system calls used to get the system time, such that the game can be run at a desired speed.

To do so, the library injects a DLL into the game's process. This DLL hooks into the OpenGL function wglSwapBuffers as well as the system calls timeGetTime, GetTickCount, GetTickCount64, and RtlQueryPerformanceCounter. The function wglSwapBuffers is called every time the game finishes rendering a frame in order to swap the back and front buffers.

If one wants to advance the game by one step and retrieve the next frame, GameInterface.step can be called from Python (see the usage sketch at the end of this document). The wglSwapBuffers hook first locks the game's execution. Then the back buffer is copied into a shared memory space so that it can be returned by GameInterface.step. The hook then releases the lock until wglSwapBuffers is called again.

The time perceived by the game can be adjusted with the methods set_speed(double factor) and run_afap(double fps). set_speed adjusts the perceived time by the given factor, i.e. if the factor is 0.5 the game runs at half speed. run_afap makes the game think it runs at the specified FPS, i.e. the current time is incremented by 1/fps every time GameInterface.step is called (a conceptual model of this mechanism is sketched at the end of this document).

For more implementation details see RLHookLib/PyRLHook/GameInterface.cpp (especially the constructor and the method step) as well as RLHookLib/RLHookDLL/GameInterfaceDll.cpp (especially the methods onAttach, initFromRenderThread, and hWglSwapBuffers). In theory, this library should also work for other games written with OpenGL.

The following Python libraries are required:

In order to run the AI, first download the pretrained network super_hexagon_net and place it in the main folder. Then start the game in windowed mode and execute the eval.py script, both with admin privileges.

In order to train your own AI, run trainer.py with admin privileges. The level being played as well as other parameters can be adjusted within the script. Note that the AI is trained on all six levels simultaneously and that you do not need to start the game manually. Make sure that the game is in windowed mode and VSync is disabled.
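For illustration, here is a minimal sketch of how the Python side of the library might be driven. The module name and the constructor call are assumptions made for this example; only GameInterface.step, set_speed, and run_afap are named above, and the authoritative API is defined in RLHookLib/PyRLHook/GameInterface.cpp.

```python
# Minimal usage sketch. The import path and the constructor arguments are
# hypothetical; only step, set_speed, and run_afap are documented above.
from pyrlhook import GameInterface  # assumed module name

game = GameInterface("superhexagon.exe")  # assumed: injects the DLL and attaches

# Make the game think it renders at 60 FPS: the perceived time then advances
# by 1/60 every time step is called, independent of wall-clock time.
game.run_afap(60.0)

for _ in range(1000):
    # Unblocks wglSwapBuffers for one frame and returns a copy of the back buffer.
    frame = game.step()
    # ... feed `frame` to the agent here and send the chosen action to the game.
```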
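The time interception itself can be pictured with a small conceptual model. The class below is not the actual C++ hook; it is a hypothetical Python rendering of the bookkeeping that run_afap implies, where every intercepted time query is answered from a virtual clock that advances by 1/fps per frame.

```python
# Conceptual model of run_afap's time skew; NOT the actual hook code.
class FakeClock:
    """Reports a virtual time that advances by 1/fps per rendered frame."""

    def __init__(self, fps: float):
        self.dt = 1.0 / fps   # virtual seconds per frame
        self.now = 0.0        # virtual seconds since attachment

    def on_frame(self) -> None:
        # Called once per GameInterface.step, i.e. once per wglSwapBuffers.
        self.now += self.dt

    def time_get_time(self) -> int:
        # What a hooked timeGetTime would report, in milliseconds.
        return int(self.now * 1000)
```

Because timeGetTime, GetTickCount, GetTickCount64, and RtlQueryPerformanceCounter would all be answered from the same virtual clock, the game's own frame timing stays self-consistent no matter how fast the host actually steps it.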
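The thresholding step described above can be sketched as follows. This is a hypothetical re-implementation of the idea, not the code in superhexagon.SuperHexagonInterface._preprocess_frame; the output resolution, threshold value, and foreground polarity are assumptions.

```python
import cv2
import numpy as np

def preprocess_frame(frame_rgb: np.ndarray) -> np.ndarray:
    """Binarize a frame so walls/player are foreground, everything else background."""
    gray = cv2.cvtColor(frame_rgb, cv2.COLOR_RGB2GRAY)
    small = cv2.resize(gray, (60, 60), interpolation=cv2.INTER_AREA)  # size assumed
    # Threshold value and polarity are assumptions; the real method may differ.
    _, binary = cv2.threshold(small, 127, 1, cv2.THRESH_BINARY)
    return binary.astype(np.float32)
```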
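Finally, since distributional RL with quantile regression is mentioned in the evaluation, here is a generic QR-DQN-style quantile Huber loss for reference. It illustrates the technique in PyTorch and is not taken from this repository; the tensor shapes and the kappa default are assumptions.

```python
import torch

def quantile_huber_loss(pred: torch.Tensor, target: torch.Tensor, kappa: float = 1.0) -> torch.Tensor:
    """Quantile Huber loss over (batch, n_quantiles) predictions and targets."""
    n = pred.shape[1]
    # Quantile midpoints tau_i = (i + 0.5) / n for the predicted quantiles.
    taus = (torch.arange(n, dtype=pred.dtype, device=pred.device) + 0.5) / n
    # Pairwise TD errors u_ij = target_j - pred_i, shape (batch, n, n).
    u = target.unsqueeze(1) - pred.unsqueeze(2)
    huber = torch.where(u.abs() <= kappa, 0.5 * u.pow(2), kappa * (u.abs() - 0.5 * kappa))
    # Asymmetric weight |tau_i - 1{u_ij < 0}| applied to the Huber term.
    loss = (taus.view(1, -1, 1) - (u.detach() < 0).float()).abs() * huber / kappa
    return loss.mean(dim=2).sum(dim=1).mean()
```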