Sam Altman, chief executive officer of OpenAI, unveiled a new version of ChatGPT that promises to follow instructions more precisely and draw on a more recent set of training data.
At its most recent developer conference, OpenAI unveiled several new features for ChatGPT and other artificial intelligence tools.
Two of the biggest announcements from the company’s event are GPTs (generative pre-trained transformers), a forthcoming tool for building custom chatbots, and GPT-4 Turbo, an upgraded model for ChatGPT.
This isn’t the first time OpenAI has introduced a new model to ChatGPT; the chatbot was upgraded from GPT-3.5 to GPT-4 earlier this year.
Curious how the GPT-4 Turbo version of the chatbot, due out later this year, will differ? Based on past product cycles, it will likely be made available to ChatGPT Plus subscribers before a general release.
OpenAI denied WIRED’s request for early access to the new ChatGPT model, but here’s how GPT-4 Turbo is expected to differ.
A New Knowledge Cutoff
Bid farewell to ChatGPT’s persistent reminder that its data only runs through September 2021. “We are probably even more irritated than you all that GPT-4’s knowledge of the world expired in 2021,” OpenAI CEO Sam Altman stated at the conference.
Since the new model is trained on data through April 2023, it can answer your questions with more current context. Altman pledged not to let ChatGPT’s information become so outdated again.
How this information is acquired continues to be a significant source of dispute among authors and publishers dissatisfied with OpenAI’s unauthorized use of their work.
Enter Longer Prompts
Feel free to use extremely long and specific prompts. “GPT-4 Turbo supports up to 128,000 context tokens,” Altman stated. Although tokens aren’t a one-to-one match for words, Altman estimated the new limit at roughly the number of words on 300 book pages.
So if you want the chatbot to analyze and summarize a lengthy document, you can now feed it more information at once.
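For developers calling the model directly, the practical question is whether a long document actually fits in that window. Here is a minimal sketch of one way to check, assuming the open-source tiktoken tokenizer (its cl100k_base encoding is the one used by GPT-4-family models); the file name and the number of tokens reserved for the reply are illustrative.

```python
# A minimal sketch: count tokens with tiktoken and compare against the
# 128,000-token context limit Altman cited for GPT-4 Turbo.
import tiktoken

GPT4_TURBO_CONTEXT_LIMIT = 128_000  # tokens, per OpenAI's announcement

def fits_in_context(document: str, reserved_for_reply: int = 4_000) -> bool:
    """Return True if the document, plus room for a reply, fits in the window."""
    enc = tiktoken.get_encoding("cl100k_base")
    n_tokens = len(enc.encode(document))
    print(f"Document is {n_tokens:,} tokens")
    return n_tokens + reserved_for_reply <= GPT4_TURBO_CONTEXT_LIMIT

# Illustrative usage: check a long report before asking for a summary.
with open("annual_report.txt") as f:  # hypothetical file
    print(fits_in_context(f.read()))
```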
Improved Instruction Following
Wish ChatGPT paid closer attention to the details of what you ask for in a prompt? According to OpenAI, the new model will be a more attentive listener.
The company’s blog post states, “GPT-4 Turbo outperforms our previous models on tasks that require precise adherence to instructions, such as generating specific formats (e.g., ‘always respond in XML’).” This could be especially beneficial for individuals who utilize the chatbot to assist them in writing code.
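In practice, a format constraint like that is typically passed as a system instruction. Below is a minimal sketch using the official openai Python SDK (v1.x); the gpt-4-1106-preview model identifier is the one OpenAI listed for the GPT-4 Turbo preview, and the prompt contents are illustrative.

```python
# A minimal sketch, assuming the openai Python SDK v1.x and an API key
# in the OPENAI_API_KEY environment variable. The model identifier is
# the GPT-4 Turbo preview name OpenAI listed at launch.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4-1106-preview",
    messages=[
        # The format constraint goes in the system message.
        {"role": "system", "content": "Always respond in XML."},
        {"role": "user", "content": "List three planets and their diameters."},
    ],
)

# Expected to come back as XML, e.g. <planets><planet>...</planet></planets>
print(response.choices[0].message.content)
```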
Reduced Costs for Developers
The price of using OpenAI’s application programming interface may not be immediately apparent to most ChatGPT users, but it can add up quickly for developers.
“Therefore, one cent is charged for one thousand prompt tokens, and three cents is charged for one thousand completion tokens,” Altman stated.
That means it may cost developers less to feed data into GPT-4 Turbo and get responses back.
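At those per-token rates, the cost of a single request is easy to estimate. A small back-of-the-envelope sketch, using the prices quoted above and an illustrative request size:

```python
# Back-of-the-envelope cost estimate using the per-token prices Altman quoted:
# $0.01 per 1,000 prompt tokens and $0.03 per 1,000 completion tokens.
PROMPT_PRICE_PER_1K = 0.01      # dollars
COMPLETION_PRICE_PER_1K = 0.03  # dollars

def estimate_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Return the estimated dollar cost of one API call."""
    return (prompt_tokens / 1000) * PROMPT_PRICE_PER_1K \
         + (completion_tokens / 1000) * COMPLETION_PRICE_PER_1K

# Illustrative example: a 10,000-token document summarized in 500 tokens.
print(f"${estimate_cost(10_000, 500):.3f}")  # -> $0.115
```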
Multiple Tools in a Single Chat
ChatGPT Plus subscribers may be familiar with the GPT-4 drop-down menu that lets them choose which chatbot tools to use.
For example, you might pick the Dall-E 3 beta to generate AI images, while the Browse with Bing version is better suited for following links to websites.
Soon, that drop-down menu will be relegated to the software graveyard. OpenAI has taken note of users’ feedback: “That model selector was incredibly bothersome,” Altman remarked.
The updated chatbot powered by GPT-4 Turbo will choose the appropriate tools on its own; ask for an image, for instance, and it is expected to respond with Dall-E 3.