There is a myth that AI can do anything you can imagine. In reality, it is more like a child learning new things every day, and only with time does it grow into a “know-it-all.” Recently, for instance, gamers noticed that AI image generators are erratic at depicting how to hold a game controller, which highlights a fundamental flaw of AI at this point: it is effective at summarizing and refining existing material but fails badly when it ventures into uncharted territory.
Some gamers believe training errors are responsible for the AI’s failure. Users recently asked Midjourney to generate photos of a “female internet celebrity happily playing PS5,” but as the photo below shows, the AI does not understand how to hold the controller properly.
In another photo (below), we see a girl holding the controller, but her grip appears to come from training material related to the well-known “Armored Core” grip, which is why she is holding it the wrong way.
One guess is that a particular image library with this peculiar AC grip was duplicated while training the AI. Alternatively, the model in the photos may not have been a gamer at all, and the AI simply learned the wrong grip from her, producing this erratic output. Notably, there are no mistakes in the hands themselves, which suggests AI image generation really is getting better.
In addition, computer and electrical engineering students at Stanford University have developed a chat advisor, built from open-source AR glasses and GPT-4, that can respond instantly to a variety of challenging questions, so users are never caught at a disadvantage in a date or job interview.
The Aura GPT (rizzGPT) project pairs single-chip AR projection hardware with OpenAI-based speech-recognition software: the software listens for questions and feeds them to the language model, while the hardware displays the generated answers on the glasses for the user to read.
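The listen → model → display loop described above can be sketched in a few lines of Python. The function names and stub components below are illustrative assumptions, not the project’s actual code; in the real system the transcription step would call a speech-recognition service (e.g. Whisper), the answer step would call GPT-4, and the display step would drive the AR lens.

```python
# Hypothetical sketch of a rizzGPT-style pipeline (names are assumptions,
# not the project's real API). Each stage is passed in as a callable so the
# same loop works with real services or with test stubs.

def answer_pipeline(audio, transcribe, ask_llm, display):
    """Run one listen -> think -> show cycle of the glasses."""
    question = transcribe(audio)   # speech recognition turns audio into text
    answer = ask_llm(question)     # the language model drafts a reply
    display(answer)                # the AR hardware renders it on the lens
    return answer


if __name__ == "__main__":
    # Stub components stand in for the real microphone, model, and display.
    shown = []
    reply = answer_pipeline(
        b"<raw audio bytes>",
        transcribe=lambda audio: "What is your greatest weakness?",
        ask_llm=lambda q: "A suggested reply to: " + q,
        display=shown.append,
    )
    print(reply)
```

Injecting the stages as callables keeps the sketch runnable without network access and mirrors the article’s split between the recognition software and the projection hardware.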