What is retrieval-augmented generation?
FAQs
What are hallucinations?
Hallucinations are fabricated statements generated by artificial intelligence that sound believable but are completely invented. They occur because AI models generate text based on statistical patterns in word sequences, without any ability to distinguish between true and false information.
What is a RAG model?
Although you may hear the term “RAG model,” RAG is not actually a model but an architectural framework that expands the native capabilities of LLMs by providing them with external knowledge sources to answer questions.
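That retrieve-then-augment flow can be sketched in a few lines. This is a minimal illustration, not a specific library's API: the keyword-overlap retriever, the sample documents, and the prompt template are all assumptions, and the final LLM call is omitted.

```python
# Minimal sketch of the RAG pattern: retrieve relevant external context,
# then prepend it to the prompt before calling an LLM.
# The retriever below uses naive keyword overlap purely for illustration;
# real systems typically use vector embeddings and a vector database.

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank documents by word overlap with the query and return the top k."""
    q_words = set(query.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Ground the model's answer in retrieved context."""
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# Hypothetical knowledge base and question, for demonstration only.
docs = [
    "The warranty covers parts and labor for two years.",
    "Returns are accepted within 30 days of purchase.",
]
print(build_prompt("How long does the warranty last?", docs))
```

The augmented prompt would then be sent to the LLM, which answers from the supplied context rather than from its training data alone.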
What is the difference between a foundation model and a large language model?
A foundation model is a large, pre-trained model that can be used for various tasks across different areas, like language, images, and more. Foundation models are versatile tools that can be fine-tuned for specific needs.
An LLM is a type of foundation model, but it's specifically designed for language tasks like text generation, translation, and answering questions.
In short, all LLMs are foundation models, but foundation models can be used for many different tasks, not just language.