
How We're Building AI Search Engines using LLM Embeddings

Posted by admin
A lot of people have asked us for ideas for how they can leverage Large Language Models (LLMs) for their business applications. A common example is using the native language-comprehension capabilities of LLMs to find matching content. This makes LLMs an excellent tool for search! In this video, ThinkNimble CTO William Huster demonstrates a prototype application that enables searching for job descriptions using an unstructured, English-language description of a job seeker.

The code for this demo can be found here: https://github.com/thinknimble/embeddings-search-demo

Chapters:
00:00 Intro - Why Build an LLM-based Search Engine?
01:00 Demo of Searching Job Descriptions
01:46 What is an Embedding?
03:06 Search by Meaning, not Content
03:52 Search with Unstructured Data
05:10 How Search with Embeddings Works
06:01 Set Up Database, Data Models, and Data
08:33 Generating Embeddings for JDs
11:04 How the Search Code Works
12:05 Creative Ways to Use Search Results
12:37 Outro - Other Use Case Examples
13:40 Outro - Final Words

Technologies used in this demo:
- Django
- PostgreSQL + pgvector
- Python sentence-transformers library

Links and Resources:
- https://www.sbert.net/ - Sentence Transformers package for Python
- https://github.com/pgvector/pgvector
- https://www.djangoproject.com/

If you're looking for a technical team to integrate AI into your business, email hello@thinknimble.com.
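
To give a rough sense of how the pieces in the video fit together, here is a minimal sketch of embedding-based search using the stack listed above (Django, pgvector, sentence-transformers). It is not the code from the demo repo: the model choice (all-MiniLM-L6-v2), the JobDescription model, and the helper functions are illustrative assumptions.

```python
# Hypothetical sketch of embedding-based search -- not the demo repo's actual code.
from django.db import models
from pgvector.django import VectorField, CosineDistance
from sentence_transformers import SentenceTransformer

# all-MiniLM-L6-v2 is one common choice; it produces 384-dimensional embeddings.
encoder = SentenceTransformer("all-MiniLM-L6-v2")


class JobDescription(models.Model):
    title = models.CharField(max_length=255)
    body = models.TextField()
    # Stored via pgvector; requires the vector extension to be enabled in PostgreSQL.
    embedding = VectorField(dimensions=384, null=True)


def index_job_description(jd: JobDescription) -> None:
    # Embed the JD text once, at write time, and store the vector on the row.
    jd.embedding = encoder.encode(f"{jd.title}\n{jd.body}")
    jd.save(update_fields=["embedding"])


def search_job_descriptions(query: str, limit: int = 5):
    # Embed the free-form, English-language query and return the JDs whose
    # stored embeddings are closest to it (smallest cosine distance).
    query_vector = encoder.encode(query).tolist()
    return (
        JobDescription.objects
        .exclude(embedding__isnull=True)
        .annotate(distance=CosineDistance("embedding", query_vector))
        .order_by("distance")[:limit]
    )
```

With data indexed this way, a free-form query such as search_job_descriptions("senior Python developer who enjoys mentoring and remote work") returns the job descriptions whose embeddings sit closest to the query's embedding, which is what lets the search match on meaning rather than exact keywords.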
Posted October 6, 2023
