ChatGPT will get even better. Here’s what we know about the new GPT-4 neural network

OpenAI has created a multimodal artificial intelligence model, GPT-4. It builds on developments from the previous version, GPT-3.5, and also introduces several new technologies. The full presentation of GPT-4 will take place next week, but preliminary information about the model's capabilities is already available.

GPT-4 offers several ways to interact with it: not only typed text, but also voice, images, audio, and video. The model is reportedly able both to perceive information in these formats and to respond in kind. ChatGPT responses powered by GPT-4 are expected to sound even more human, that is, closer to the way people normally speak to each other. GPT-4 will also support multiple languages.
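OpenAI has not yet published a GPT-4 API, so it is unclear exactly how this multimodal input will be exposed to developers. A rough idea of what mixed text-and-image input might look like can be sketched with the existing OpenAI Python SDK's chat interface; the model identifier "gpt-4" and the image-message format below are assumptions, not confirmed details of the upcoming release:

```python
# Illustrative sketch only: the "gpt-4" model name and the image-message
# format are assumptions based on the existing OpenAI Python SDK,
# not confirmed details of the upcoming GPT-4 release.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4",  # assumed identifier; the released model may differ
    messages=[
        {
            "role": "user",
            "content": [
                # Text and an image combined in a single user message
                {"type": "text", "text": "Describe what is in this picture."},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/photo.jpg"},
                },
            ],
        }
    ],
)

print(response.choices[0].message.content)
```

Voice and video input would presumably require similar message types for audio and video streams, which the current SDK does not yet define.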

OpenAI also plans to release a ChatGPT mobile app based on the GPT-4 neural network. Currently, the chatbot is officially available only in the browser, although there are unofficial applications and Telegram bots that use the GPT-3 and GPT-3.5 models. In addition, Microsoft, which has invested ten billion dollars in OpenAI, will launch an updated version of the Bing search engine powered by GPT-4. The same model may also appear in other Microsoft products, including the Windows 11 operating system.