- OpenAI introduces updated versions of GPT-4 and GPT-3.5-turbo with function calling, enabling the models to output structured JSON arguments for developer-defined functions.
- GPT-3.5-turbo now offers a 16,000-token context window, four times larger than before, improving the model's ability to retain and process longer conversations.
- OpenAI reduces pricing for GPT-3.5-turbo and text-embedding-ada-002, making these AI models more affordable for developers.
OpenAI, a leading artificial intelligence (AI) research lab, is making waves in the competitive generative AI landscape with the introduction of upgraded text-generating models and a reduction in pricing.
Today, OpenAI announced updated versions of its GPT-3.5-turbo and GPT-4 text-generating models. The headline feature, available in both models, is function calling. OpenAI explains that function calling lets developers describe programming functions to the models, which can then respond with the name of a function to call and JSON-formatted arguments that adhere to the function's signature; the developer's own code executes the call. This opens up a range of possibilities, from building chatbots that use external tools to answer questions, to converting natural language into database queries and extracting structured data from text. OpenAI emphasizes that the models have been fine-tuned to recognize when a function should be called, greatly improving developers' ability to obtain structured data from them.
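The flow described above can be sketched in a few lines of Python. The `get_weather` function, its schema, and the simulated assistant message are all hypothetical stand-ins for illustration; in a real application the message would come back from the API, and the schema would be passed in the request's function list. The key point is that the model only names the function and supplies JSON arguments, while the developer's code does the actual execution:

```python
import json

# Hypothetical function the developer exposes to the model.
def get_weather(location: str, unit: str = "celsius") -> dict:
    # A real app would call a weather service here; stubbed for illustration.
    return {"location": location, "temperature": 22, "unit": unit}

# JSON Schema description of the function, as sent to the API.
weather_schema = {
    "name": "get_weather",
    "description": "Get the current weather for a location",
    "parameters": {
        "type": "object",
        "properties": {
            "location": {"type": "string"},
            "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
        },
        "required": ["location"],
    },
}

# Simulated assistant reply: the model does not run anything itself,
# it returns the function name plus JSON-encoded arguments.
assistant_message = {
    "role": "assistant",
    "content": None,
    "function_call": {
        "name": "get_weather",
        "arguments": '{"location": "Berlin", "unit": "celsius"}',
    },
}

def dispatch(message, registry):
    """Parse the model's function_call and run the matching local function."""
    call = message["function_call"]
    args = json.loads(call["arguments"])  # arguments arrive as a JSON string
    return registry[call["name"]](**args)

result = dispatch(assistant_message, {"get_weather": get_weather})
print(result)
```

The dispatch step is where the developer regains control: the returned value would typically be sent back to the model as a follow-up message so it can compose a natural-language answer.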
OpenAI is also expanding the context window with an enhanced version of GPT-3.5-turbo. The context window is the amount of text the model considers before generating additional content; models with small context windows tend to lose track of recent conversation and veer off-topic. The new GPT-3.5-turbo offers a context length four times that of its predecessor, processing up to 16,000 tokens, though at a somewhat higher price: $0.003 per 1,000 input tokens and $0.004 per 1,000 output tokens. While this falls short of the context lengths of certain rival models, OpenAI is already testing a limited-release version of GPT-4 with a 32,000-token context window.
In a surprising move, OpenAI has also cut input-token pricing for the original GPT-3.5-turbo model by 25%. Developers can now access the model at $0.0015 per 1,000 input tokens and $0.002 per 1,000 output tokens, which OpenAI says works out to roughly 700 pages per dollar, making it a more cost-effective option for developers seeking advanced text generation capabilities.
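The pages-per-dollar figure can be sanity-checked with simple arithmetic. The tokens-per-page value below is an illustrative assumption, not an OpenAI number:

```python
# Back-of-the-envelope check of the "~700 pages per dollar" claim.
INPUT_PRICE_PER_1K = 0.0015   # USD per 1,000 input tokens
TOKENS_PER_PAGE = 1_000       # assumed average, for illustration only

tokens_per_dollar = 1_000 / INPUT_PRICE_PER_1K      # about 666,667 tokens
pages_per_dollar = tokens_per_dollar / TOKENS_PER_PAGE
print(round(pages_per_dollar))
```

At roughly 1,000 tokens per page this lands near the quoted ~700 pages; a denser or sparser page estimate shifts the result accordingly.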
Additionally, OpenAI has implemented significant price reductions for text-embedding-ada-002, a widely popular text embedding model. Text embeddings measure the relatedness between different text strings, playing a crucial role in search engines and recommendation systems. OpenAI has slashed the pricing for text-embedding-ada-002 by a staggering 75%, now priced at an affordable $0.0001 per 1,000 tokens. The reduction in cost is attributed to OpenAI’s increased efficiency in its systems, demonstrating the organization’s commitment to optimizing research and infrastructure expenditures.
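Relatedness between embeddings is commonly scored with cosine similarity. The sketch below uses made-up 3-dimensional vectors purely for illustration; real text-embedding-ada-002 vectors have 1,536 dimensions:

```python
import math

# Toy illustration of how embeddings measure relatedness via cosine
# similarity. The vectors are invented for the example; actual
# text-embedding-ada-002 output is 1,536-dimensional.

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

cat = [0.9, 0.1, 0.2]      # hypothetical embedding for "cat"
kitten = [0.85, 0.15, 0.25]  # hypothetical embedding for "kitten"
car = [0.1, 0.9, 0.3]      # hypothetical embedding for "car"

# Semantically close texts should score higher than unrelated ones.
print(cosine_similarity(cat, kitten) > cosine_similarity(cat, car))
```

In a search or recommendation system, this score would rank stored documents against a query embedding, with the highest-similarity items returned first.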
Following the release of GPT-4 in March, OpenAI's strategic focus has shifted toward incremental updates to existing models rather than building entirely new models from scratch. CEO Sam Altman reaffirmed this approach at a recent conference hosted by the Economic Times, saying that OpenAI is still laying groundwork before starting development on its next model.