Teaching Curiosity-Driven AI to Play Games
25 June, 2018

Here is another work on using AI to play games, this time with a focus on curiosity. How can a machine be curious? In this case, curiosity is defined by the AI's ability to predict the results of its own actions. This is a big deal because it gives the AI the tools to acquire skills that don't seem necessary now but might become useful in the future.

First, let's check out a video on the paper by Two Minute Papers:


In many real-world scenarios, rewards extrinsic to the agent are extremely sparse or absent altogether. In such cases, curiosity can serve as an intrinsic reward signal to enable the agent to explore its environment and learn skills that might be useful later in its life. We formulate curiosity as the error in an agent’s ability to predict the consequence of its own actions in a visual feature space learned by a self-supervised inverse dynamics model. Our formulation scales to high-dimensional continuous state spaces, like images, bypasses the difficulties of directly predicting pixels, and, critically, ignores the aspects of the environment that cannot affect the agent. The proposed approach is evaluated in two environments: VizDoom and Super Mario Bros. Three broad settings are investigated: 1) sparse extrinsic reward, where curiosity allows for far fewer interactions with the environment to reach the goal; 2) exploration with no extrinsic reward, where curiosity pushes the agent to explore more efficiently; and 3) generalization to unseen scenarios (e.g. new levels of the same game) where the knowledge gained from earlier experience helps the agent explore new places much faster than starting from scratch.
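To make the idea in the abstract concrete, here is a minimal, hypothetical sketch of the intrinsic reward: a forward model tries to predict the next state's features from the current features and the action, and the prediction error becomes the curiosity signal. The encoder, the network weights, and all dimensions here are illustrative stand-ins, not the paper's actual architecture (which trains the feature space jointly with an inverse dynamics model).

```python
import numpy as np

rng = np.random.default_rng(0)

FEAT_DIM = 8    # size of the learned feature space (assumed for the sketch)
N_ACTIONS = 4   # number of discrete actions (assumed)
OBS_DIM = 16    # raw observation size (assumed)

# Stand-in for the feature encoder phi(s); in the paper this is a
# conv net trained via a self-supervised inverse dynamics model.
W_enc = rng.normal(size=(OBS_DIM, FEAT_DIM))

def encode(state):
    return np.tanh(state @ W_enc)

# Forward model f(phi(s), a) -> predicted phi(s'), here a single
# random linear layer purely for illustration.
W_fwd = rng.normal(size=(FEAT_DIM + N_ACTIONS, FEAT_DIM)) * 0.1

def predict_next_features(feat, action):
    a_onehot = np.eye(N_ACTIONS)[action]
    return np.concatenate([feat, a_onehot]) @ W_fwd

def curiosity_reward(state, action, next_state, eta=0.5):
    """Intrinsic reward = scaled squared error of the forward model."""
    feat, next_feat = encode(state), encode(next_state)
    pred = predict_next_features(feat, action)
    return eta * 0.5 * np.sum((pred - next_feat) ** 2)

# A transition the forward model predicts poorly is "surprising",
# so it earns a larger intrinsic reward and attracts exploration.
s, s_next = rng.normal(size=OBS_DIM), rng.normal(size=OBS_DIM)
r_i = curiosity_reward(s, action=2, next_state=s_next)
print(float(r_i))
```

In training, this reward would be added to (or substituted for) the extrinsic game reward, so the agent is pushed toward transitions its forward model cannot yet predict.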

The paper “Curiosity-driven Exploration by Self-supervised Prediction” and its source code can be found here.
