Gigachat 2.0: Powerful Neural Network Assistant now available for all users


Moscow, 14 April (IANS). Sber's Gigachat 2.0 is now available to all users. Announcing this on Monday, the company said that thanks to a new training approach, all of the model's skills have improved significantly.

According to the company, the artificial intelligence (AI) has learned to understand audio files, analyze users' requests, process large volumes of text and recognize images.

All Gigachat features are available in one product and on any interface, so the user does not need to switch between different services.

The model range includes two versions: Gigachat 2 Pro and Gigachat 2 Max. Max is the most advanced model, intended for complex and professional tasks, while Pro is suited to quick, high-quality handling of everyday tasks, from answering questions to writing and editing text.

According to the company, Gigachat 2.0 now knows how to work with current data from the Internet. It analyzes questions more deeply and provides a brief answer with links to the sources. The AI finds information for the user, filters out the most relevant results, and supports its findings with links the user can follow when additional information is needed.

For example, you can ask the model: "Where to go in St. Petersburg with children aged 7 and 12 this weekend?" or "How much will it cost to renovate a standard one-room apartment in Moscow?"

It is now possible to work with multiple files in the same conversation. Documents of up to 200 A4 pages can be uploaded to the chat. Sample prompt: "What should I pay attention to in this lease agreement? Focus on the laws of the Russian Federation", with the contract attached.

Gigachat 2.0 handles audio files at a fundamentally new level. The model understands audio data directly, without any intermediate conversion, which makes it possible to highlight the main points more accurately and answer questions about the material.

The company said, "Just attach a recording and write a query. Files up to 60 minutes long and 30 MB are supported. And if typing is inconvenient or impossible, you can record a voice message. Gigachat 2.0 can communicate in different languages, understands complex words better, and also recognizes spoken language and music."

Sample prompts: "Listen to this audio recording and tell me what my colleague might not have liked about my words"; "Write out the list of medicines and recommendations from my doctor's voice message"; "Listen to this video-call recording and write down everything that was said about outdoor advertising"; "Help me prepare my speech for the project presentation (text to speech)".

Now you only have to share links to the content you are interested in, and Gigachat will extract the important information. The model creates a brief summary of a website's content, compares articles on the same subject, works with several links simultaneously, and recognizes images from websites.

Sample prompt: "Help me prepare for an interview for this job."

Gigachat 2.0 can also process a video from a link. By understanding the audio track, the model can explain the main point of a video essay or answer questions about a lecture (this also works with English and other languages). Sample prompt: "What is this video about? Link".

The ability to generate music and songs from a text prompt with Gigachat has reached a new level. The maximum song length is now up to 3 minutes, while generation time remains about 1 minute. The team has improved how closely the final result matches the prompt, as well as the sound quality and the structure of songs in the Chinese language.

Sample prompt: click "Generate a Song", enter the lyrics or theme of the song to be produced, and choose a style or describe your own, for example: "A song in the style of modern youth pop music. Use a pulsating bass, bright synths and a tight beat".

The model can now extract more useful information from an image and give more accurate answers about its content. For example, it can recommend which style of clothing to choose for a particular occasion, help solve an equation from a textbook, or interpret the results of a medical examination.

Sample prompt: "I have received a bill for housing and utilities. Can you tell me what I am paying for?"

For the first time in Russia, a smart speaker has been fully integrated with a large language model, taking its intellectual abilities to a new level.

Gigachat communicates with the user in a lively way, in language they understand or in a chosen role, allowing conversations to last up to 10 times longer.

For example, it can explain the principle of relativity to a child in simple terms or deliver the weather forecast in the voice of a chosen character.

Now the artificial intelligence not only manages the dialogue, but also applies skills such as playing music or setting reminders. You can also give several commands in a single query, and the speaker will switch between them on its own.

Conversations with the assistant can now also be tailored to the user's preferences, with 18 combinations available, covering the communication style, the assistant's voice, and formal or informal forms of address.

Sample prompts: "Hi, I have created a giraffe, but it looks boring. What can I add to it?"; "Salute, explain the principle of relativity to a seven-year-old child"; "Salute, set an alarm for six o'clock in the morning and play some workout music."

Gigachat 2.0 has also become the first neural network available on Max, a platform by VK. This is an application with a built-in messenger, mini-apps, a chatbot builder, an online registration system and a payment service. The company said, "Using Sber's neural network model, Max users can create text and images, transcribe audio, and get short summaries of videos and articles as well as answers to many questions. To try out Gigachat's capabilities, find @gigachat and then follow the instructions."

-IANS

