Fabrity's blog
LLM evaluation benchmarks—a concise guide
Discover what LLM evaluation benchmarks are, why they matter, and how they help determine which model truly stands out based on performance metrics.
Leveraging LLM function calling to harness real-time knowledge
Discover how LLM function calling enables real-time knowledge access, enhancing generative AI for more accurate and effective business solutions.
Will large context windows kill RAG pipelines?
Will the large context windows in new LLMs make RAG pipelines obsolete? Read the full article to find out.
What is synthetic data, and how can it help us break the data wall?
Explore how synthetic data can overcome AI’s data wall, enhancing model training and innovation in various industries.
Boosting productivity with an AI personal assistant—three real-life use cases
Learn practical ways an AI personal assistant boosts productivity in your organization. Read about three real-life examples from Fabrity’s experience.
RAG vs. fine-tuning vs. prompt engineering—different strategies to curb LLM hallucinations
RAG vs. fine-tuning vs. prompt engineering—unsure what strategy to choose? Read this article to explore the pros, cons, and best use cases.