GPT-3 has 175 billion parameters, making it more than 100x the size of its predecessor, GPT-2
OpenAI, an artificial intelligence research laboratory in San Francisco, recently announced that it is rolling out its most powerful language model yet, GPT-3 (Generative Pre-trained Transformer 3). It is the third release in the GPT series and the successor to GPT-2. This latest version takes the GPT model to a whole new level: it has an enormous 175 billion parameters, more than 100x the size of GPT-2.
OpenAI has stated that the 175-billion-parameter deep learning model produces human-like text, and that it was trained on large text datasets containing hundreds of billions of words. The new language model is a marked advance over its predecessor, and it can also handle more niche topics.
OpenAI's GPT-2 performed poorly in specialized areas such as music and storytelling. GPT-3 goes further: it can be used to answer questions, write essays, translate between languages, and generate computer code.
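To make the question-answering use case concrete, here is a minimal sketch of how a request to GPT-3 might be assembled. The endpoint URL, engine name (`davinci`), and parameter names follow OpenAI's publicly documented completions API at the time of GPT-3's release; they are illustrative assumptions, not details taken from this article, and the sketch only builds the payload rather than sending it.

```python
import json

# Hypothetical endpoint for GPT-3 text completions (per OpenAI's public API
# docs at launch; shown for illustration only).
API_URL = "https://api.openai.com/v1/engines/davinci/completions"

def build_completion_request(prompt, max_tokens=64, temperature=0.7):
    """Return the JSON payload for a text-completion request."""
    return {
        "prompt": prompt,
        "max_tokens": max_tokens,    # upper bound on generated tokens
        "temperature": temperature,  # sampling randomness (0 = near-deterministic)
        "stop": ["\n"],              # stop generating at the first newline
    }

# Question answering is just completion with a Q/A-shaped prompt.
payload = build_completion_request("Q: Who wrote On the Origin of Species?\nA:")
print(json.dumps(payload, indent=2))
```

In practice this payload would be POSTed to the endpoint with an API key; the same pattern covers essay writing, translation, and code generation simply by changing the prompt.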