The Future of Gaming – Will AI Be the Game Changer?

Right now we're at the start of an AI revolution. You might hear people talking about AI threatening to take over jobs, or doing strange things like beating humans at the board game Go (AlphaGo) or at StarCraft II. Most people assume the arts and creative fields are beyond AI's capabilities, but this is up for debate.

A method called style transfer, which usually entails taking the artistic style of a famous painter and transferring it onto a still image or even video, has already been achieved for still images. The AI managed to fool 39% of art historians. But what about creative AI doing something actually useful, something that could save millions of dollars and improve an entire industry by leaps and bounds?

Well, this is happening. AI's creativity is now moving into the 3D visual arts, potentially revolutionizing video game production and graphics. In this article, we'll take a look at that and at some of the other ways the AI revolution is seeping into gaming.

What do you think: will AI gaming be the future of gaming? Let us know in the comments!

More Realistic Games

Since Crysis in 2007, we haven't really seen a major step forward in the visual quality of games. Yes, they look marginally better, but there hasn't been an incredible leap forward. This is about to change with something called real-time ray tracing.

Ray tracing, without going into too much detail, simulates the way light reflects off and interacts with surfaces in the real world. It does this by calculating the path that each ray of light would take through a scene. Ray tracing gives an element of realism that's missing from today's games: current games fake the effect of light's behavior, while ray tracing simulates it. That's why Pixar movies look so different from playable video games.
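To make the idea concrete, here's a minimal toy sketch in Python of what "calculating the path of each ray" means: one ray is cast per pixel and tested against a single sphere. This is purely illustrative and has nothing to do with how a real game engine or RTX hardware implements it.

# Toy ray tracer: cast one ray per pixel and check what it hits.
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the distance along the ray to the sphere, or None if it misses."""
    # Solve |origin + t*direction - center|^2 = radius^2 for t (a quadratic in t).
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    dx, dy, dz = direction
    a = dx*dx + dy*dy + dz*dz
    b = 2 * (ox*dx + oy*dy + oz*dz)
    c = ox*ox + oy*oy + oz*oz - radius*radius
    disc = b*b - 4*a*c
    if disc < 0:
        return None                      # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / (2*a)   # nearest intersection in front of the camera
    return t if t > 0 else None

# Render a tiny 8x8 "image" of a sphere sitting in front of the camera.
sphere_center, sphere_radius = (0.0, 0.0, -3.0), 1.0
for y in range(8):
    row = ""
    for x in range(8):
        # Map the pixel to a viewing direction on a simple pinhole camera.
        direction = ((x - 3.5) / 6.0, (3.5 - y) / 6.0, -1.0)
        hit = ray_sphere_hit((0.0, 0.0, 0.0), direction, sphere_center, sphere_radius)
        row += "#" if hit else "."
    print(row)

Even this toy version does one intersection test per pixel; a real scene has millions of pixels, many bounces per ray, and thousands of objects, which is exactly why the cost discussed next becomes a problem.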


So this sounds good, but the problem with ray tracing is that it's very computationally expensive. Traditionally you need a big farm of graphics processors to produce results, which makes it suited to pre-rendered CGI movies rather than real-time video games.

This all changed with NVIDIA's RTX graphics cards. With these, real-time ray tracing is now possible, achieved by specialized hardware tailored for AI doing all of the heavy lifting. The platform is called NGX, and it uses processors custom-built for matrix multiplication.

The AI learns how light should bounce and simulates this, placing the paths of light where it thinks they should go. The result is a realistic, real-time image that came down to the consumer level for the first time in 2019, when ray tracing was integrated into Unreal Engine and the technology became usable in very general cases.
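To give a feel for what "AI doing the heavy lifting" looks like, here's an illustrative Python sketch: a noisy, sparsely traced image is cleaned up by a small 3x3 filter applied as one big matrix multiplication, the kind of operation that this hardware is built to accelerate. The filter weights here are hand-picked rather than trained, and this is in no way NVIDIA's actual denoising pipeline.

# Illustrative only: smooth a noisy render with a fixed 3x3 filter via matrix multiply.
import numpy as np

rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0.0, 1.0, 16), (16, 1))   # stand-in for a fully traced image
noisy = clean + rng.normal(0.0, 0.2, clean.shape)      # few rays per pixel, noisy estimate

# A 3x3 smoothing kernel flattened to a vector, so denoising one pixel is a dot product
# and denoising the whole image is one matrix multiplication.
kernel = np.array([1, 2, 1, 2, 4, 2, 1, 2, 1], dtype=float)
kernel /= kernel.sum()

# Gather every 3x3 neighbourhood into the rows of a matrix (ignoring the 1-pixel border).
patches = np.stack([
    noisy[i:i+3, j:j+3].ravel()
    for i in range(noisy.shape[0] - 2)
    for j in range(noisy.shape[1] - 2)
])
denoised = (patches @ kernel).reshape(noisy.shape[0] - 2, noisy.shape[1] - 2)

print("noise before:", np.abs(noisy[1:-1, 1:-1] - clean[1:-1, 1:-1]).mean().round(3))
print("noise after: ", np.abs(denoised - clean[1:-1, 1:-1]).mean().round(3))

A real denoiser is a deep neural network trained on pairs of noisy and fully traced frames, but the principle is the same: trace a few rays, then let a learned filter fill in the rest.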

To highlight just how much of a difference ray tracing makes, here's a video by basildoomHD showing Minecraft modded with ray tracing. Minecraft is probably one of the worst-looking games graphics-wise, but with ray tracing it actually looks pretty realistic. It doesn't end there, though. Here are some real-time ray-traced scenes of a BMW juxtaposed with a shot of a real BMW.

In the future, this kind of video game quality will become standard. AI is also being used to create more realistic smoke and fluid simulations without the heavy processing that usually goes with them.

Not only this, but a research team at the University of Edinburgh has developed a machine learning system that is trained by watching motion capture clips showing various kinds of movement. The system then generates an animation, which can be anything from a jog to a run to hopping over an object. To train the system, the researchers explain:

We first capture several long sequences of raw locomotion data at a variety of speeds, facing directions, and turning angles. We also capture the motion of stepping, climbing, and running over obstacles placed in the capture studio. By giving as input the height of the terrain under the trajectory, our character can adapt to rough terrain, climbing, balancing, and jumping where required. Using either the gamepad or the environment, we can get the character to crouch or move under obstacles. It can additionally be forced into certain environments, such as walking over a beam.

Researchers at the University of Edinburgh

This new method of character control, using a phase-functioned neural network, can produce high-quality motion for complex control tasks such as walking over rough terrain. It's fast, compact, and stable, and it can learn from a large amount of data.
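Here's a toy Python sketch of the core "phase function" idea: instead of being fixed, the network's weights are blended from a few control sets depending on where the character is in its walk cycle. The layer sizes and random weights below are placeholders chosen for the sketch, not the Edinburgh group's actual model.

# Toy phase-functioned network: weights depend on the walk-cycle phase.
import numpy as np

rng = np.random.default_rng(42)

N_CONTROL = 4                  # control weight sets spread around the phase circle
IN, HIDDEN, OUT = 12, 32, 8    # e.g. trajectory/terrain features in, joint targets out

# One (W, b) pair per layer, per control point on the phase circle.
W1 = rng.normal(size=(N_CONTROL, HIDDEN, IN))
b1 = rng.normal(size=(N_CONTROL, HIDDEN))
W2 = rng.normal(size=(N_CONTROL, OUT, HIDDEN))
b2 = rng.normal(size=(N_CONTROL, OUT))

def blend(params, phase):
    """Blend the two nearest control points for this phase (0..2*pi).
    The published model uses a smoother cyclic blend; linear keeps the sketch short."""
    t = (phase / (2 * np.pi)) * N_CONTROL
    i0 = int(np.floor(t)) % N_CONTROL
    i1 = (i0 + 1) % N_CONTROL
    w = t - np.floor(t)
    return (1 - w) * params[i0] + w * params[i1]

def character_network(features, phase):
    """Map input features to a pose, using weights chosen by the current phase."""
    h = np.tanh(blend(W1, phase) @ features + blend(b1, phase))
    return blend(W2, phase) @ h + blend(b2, phase)

features = rng.normal(size=IN)            # stand-in for terrain heights, velocity, etc.
for phase in (0.0, np.pi / 2, np.pi):     # different points in the walk cycle
    pose = character_network(features, phase)
    print(f"phase {phase:.2f} -> first pose values {np.round(pose[:3], 2)}")

The appeal of this design is that one compact network covers the whole gait cycle, which is why it runs fast enough for games.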


This takes the tedious work out of creating realistic movements for video game characters. In a free market, whatever is cheaper, faster, and better eventually becomes the standard, and the same applies to the gaming industry's current style of game development. It's no secret that the cost of making video games is going up. But why is this?

Game Production Costs Will Come Down

Once upon a time, major games could be made by a small number of people, because the assets, the 3D components of the game, were simple. As time went on, games strove to chase realism, and the components became ever more complex. It's gotten to the point where it now takes a large team and the budget of a blockbuster movie to complete a major video game title.

Andrew Price, a video game artist, gave a talk at the Blender Conference in 2018 in which he produced a realistic estimate of exactly how things get so expensive. One detailed 3D building can take 22 hours to create: 12 hours for the modeling stage and another 10 hours for texturing. But the job is far from complete at this stage; there are usually two to four revision passes, depending on what's needed for the game's story. The end result can be 44 to 88 hours of work for a single building within a game. At a wage of $60 per hour, the average cost of one building is around $4,000, and when you add up all the costs it becomes comical: a single scene can cost around $200,000, so a full game has no problem reaching hundreds of millions of dollars.
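The arithmetic behind those numbers is simple enough to check in a few lines of Python; the hours, revision counts, and $60/hour rate are the figures quoted from the talk, not independent data.

# Back-of-the-envelope version of the per-building cost estimate quoted above.
MODELING_HOURS = 12
TEXTURING_HOURS = 10
HOURLY_RATE = 60          # dollars per hour

base_hours = MODELING_HOURS + TEXTURING_HOURS      # 22 hours for one full pass
for revisions in (2, 4):                            # the talk assumes 2 to 4 passes
    total_hours = base_hours * revisions
    cost = total_hours * HOURLY_RATE
    print(f"{revisions} passes: {total_hours} hours, ~${cost:,} per building")

# 2 passes -> 44 hours (~$2,640); 4 passes -> 88 hours (~$5,280);
# the "around $4,000" figure is roughly the midpoint.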

Currently, video game artists use a static workflow: the ratio from the design of an element to the finished product is one-to-one. Creating a new 3D asset, be it a car or a building, requires the designer to start almost completely from scratch. Can that really scale into the future of gaming?


This is where AI comes in. AI, particularly neural networks, is suited to a kind of workflow called procedural content generation. This is basically where a neural network is trained on vast volumes of data to create realistic buildings, cars, or audio. Over time, it learns what looks good and what's realistic.

With procedural content generation, a game designer simply types in certain parameters: the building should be of a certain type, have a certain number of windows, and fall within a certain height range. Given these parameters, the AI generates a whole batch of options and the designer simply picks one, with little or no modification. This is where the advantage comes in: if another 3D asset is needed, instead of starting from scratch the designer just enters the parameters and the AI generates it again. Countless hours are saved, even if training the AI in the first place takes a while. The same process can be applied to natural landscapes or even to video game characters themselves. With all this, will AI gaming be the future of gaming? What are your thoughts?
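As a sketch of that workflow, here's a toy Python generator: the designer passes parameters and gets back a batch of candidate buildings to pick from. A real pipeline would put a trained generative model behind generate_building(); simple random rules are used here purely to show the flow, and every name in it is hypothetical.

# Parameter-driven content generation: parameters in, candidate assets out.
import random

def generate_building(kind, windows, height_range, seed):
    rng = random.Random(seed)
    floors = rng.randint(*height_range)
    return {
        "kind": kind,
        "floors": floors,
        "windows_per_floor": max(1, windows // floors),
        "roof": rng.choice(["flat", "gabled", "terraced"]),
        "facade": rng.choice(["brick", "glass", "concrete"]),
    }

# Designer's parameters: one call, several candidates, pick the best one.
candidates = [
    generate_building(kind="office", windows=60, height_range=(5, 12), seed=s)
    for s in range(4)
]
for i, building in enumerate(candidates):
    print(f"option {i}: {building}")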

To top things off, what if an AI could create a whole video game by itself?

Well, of course, this has already been done.

Recently, researchers at Georgia Tech took things a step further by using a generative adversarial network, a type of neural network, to invent new games. In their paper, they used video game levels from already-developed games as inputs and converted them into outputs that lay out the environments, objects, and rules of a new game. This particular system learned from two games, Super Mario Bros. and Kirby's Adventure, and the output turned out to be a game similar to Mega Man.
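For readers curious what that adversarial setup looks like in code, here's a stripped-down Python (PyTorch) sketch: a generator proposes level grids and a discriminator tries to tell them apart from "real" ones. The training data below is random placeholder tiles rather than Mario or Kirby levels, so this demonstrates only the training loop, not the Georgia Tech system.

# Minimal GAN sketch over flattened 8x8 tile grids.
import torch
from torch import nn

LEVEL_TILES = 8 * 8     # a level as a flattened grid of tile occupancies
NOISE_DIM = 16

generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 64), nn.ReLU(),
    nn.Linear(64, LEVEL_TILES), nn.Sigmoid(),   # tile probabilities in [0, 1]
)
discriminator = nn.Sequential(
    nn.Linear(LEVEL_TILES, 64), nn.ReLU(),
    nn.Linear(64, 1), nn.Sigmoid(),             # probability the level is "real"
)
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

real_levels = (torch.rand(256, LEVEL_TILES) > 0.7).float()   # placeholder training data

for step in range(200):
    real = real_levels[torch.randint(0, 256, (32,))]
    fake = generator(torch.randn(32, NOISE_DIM))

    # Discriminator: label real levels 1, generated levels 0.
    d_loss = (loss_fn(discriminator(real), torch.ones(32, 1))
              + loss_fn(discriminator(fake.detach()), torch.zeros(32, 1)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator: try to make the discriminator call its levels real.
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

new_level = (generator(torch.randn(1, NOISE_DIM)) > 0.5).int().reshape(8, 8)
print(new_level)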

This article shines a light on a side of AI that isn't really talked about: AI is not only creative, it can be genuinely useful in practice. Will this be the future of gaming? Comment down below with your thoughts.

AI is going to have a drastic effect on the future of gaming. Games will look better, be more realistic, and be much cheaper to produce.

VR in Gaming

Over the past few years, virtual reality has desperately tried to cement itself in our gaming universe, and while the tech has gotten better and better, it's still not that popular. The likes of HTC, Sony, and Oculus continue to develop new games and evolve headset technology, but the fact remains that whenever you play, you're always on your own. Until now, that is, because a startup called Sandbox VR is hoping to change all that.


The company uses haptic vests with 40 sensors sewn into the fabric, which create small vibrations to mimic the physical effects of the game, such as contact with aliens, impacts, and explosions. But where Sandbox really differs from other VR experiences is in its motion capture technology, which enables the game to follow each player's movements. Do you think this could be the future of gaming? Comment down below!

The trend doesn't end here. Check out this cool video about Valve's Index VR headset and see what I'm talking about.

What Role Does the Cloud Play in the Future of Gaming?

Cloud gaming is one of the most trending topics in the gaming industry, and many are saying that, together with AI, it will be the future of gaming. But how many of you actually understand it?

Cloud gaming means streaming games directly to your PC over the internet. A good example is Google Stadia, which generated a lot of hype, with many experts saying it would be the future of gaming.

Many companies are in the race to develop cloud gaming, including Sony, Microsoft, and Google; the full list is a long one.

But cloud gaming has a drawback: it requires a very good internet connection, which is still not available in some African and Asian countries.
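To put some rough numbers on that drawback, here's a quick back-of-the-envelope Python calculation; the bitrates are generic ballpark figures for compressed game streams, not any particular service's requirements.

# Rough illustration of why cloud gaming leans so hard on the connection.
STREAM_PROFILES = {          # resolution/framerate -> approximate megabits per second
    "720p @ 30 fps": 5,
    "1080p @ 60 fps": 20,
    "4K @ 60 fps": 40,
}

hours_played = 2
for profile, mbps in STREAM_PROFILES.items():
    gigabytes = mbps / 8 * 3600 * hours_played / 1000   # Mb/s -> GB over the session
    print(f"{profile}: ~{mbps} Mbps sustained, ~{gigabytes:.0f} GB per {hours_played}h session")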


Maneesh Ashuthosh

Maneesh Ashuthosh is a B.Tech computer science student, a tech enthusiast, a UI/UX designer, and the editor of Tech Unaltered. He is an avid observer of all news related to tech and doesn't miss any opportunity to share the stories he loves with his readers.
