OLLAMA | How To Run UNCENSORED AI Models on Mac (M1/M2/M3)
One-sentence video overview: How to install and use Ollama on a Mac running Apple Silicon.
📚 What You'll Learn:
* Installing Ollama on your Mac M1, M2, or M3 (Apple Silicon) - https://ollama.com
* Downloading Ollama models directly to your computer for offline access
* Using core Ollama commands from the terminal (see the command reference and example session below)
* Running open-source models like llama2, llama2-uncensored, and codellama locally with Ollama
Chapters
00:00:00 - Intro
00:00:15 - Downloading Ollama
00:01:43 - Reviewing Ollama Commands
00:02:29 - Finding Open-Source Uncensored Models
00:05:39 - Running the llama2-uncensored model
00:07:25 - Listing installed ollama models
00:09:18 - Removing installed ollama models
🦙 Ollama Commands:
View Ollama Commands: ollama help
List Ollama Models: ollama list
Pull Ollama Models: ollama pull model_name
Run Ollama Models: ollama run model_name
Delete Ollama Models: ollama rm model_name
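Putting those together, a minimal example session might look like this (llama2-uncensored is the model run in the video; any model from the Ollama library works the same way):
ollama pull llama2-uncensored
ollama run llama2-uncensored
>>> Why is the sky blue?
(chat at the >>> prompt; type /bye to exit)
ollama list
ollama rm llama2-uncensored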
📺 Other Videos you might like:
🖼️ Ollama & LLava | Build a FREE Image Analyzer Chatbot Using Ollama, LLava & Streamlit! https://youtu.be/1IosVm-OERs
🤖 Streamlit & OLLAMA - I Build an UNCENSORED AI Chatbot in 1 Hour!: https://youtu.be/vDD_L0ab-FY
📝 Build Your Own AI 🤖 Chatbot with Streamlit and OpenAI: A Step-by-Step Tutorial: https://youtu.be/UKclEtptH6k
🔗 Links
Ollama - https://ollama.com
Ollama Models - https://ollama.com/models
🧑‍💻 My MacBook Pro Specs:
Apple MacBook Pro M3 Max
14-Core CPU
30-Core GPU
36GB Unified Memory
1TB SSD Storage
ℹ️ Other info you may find helpful 👇
Check whether your computer can run a given LLM with this Hugging Face tool: https://huggingface.co/spaces/Vokturz/can-it-run-llm
Remember that you will need enough GPU memory (VRAM) to run models with Ollama; on Apple Silicon Macs this comes out of unified memory, which is shared between the CPU and GPU. If you are unsure how much memory a model needs, check out a calculator HuggingFace created called "Model Memory Calculator" here: https://huggingface.co/docs/accelerate/main/en/usage_guides/model_size_estimator
Also, here is an article that walks you through the exact math behind "Calculating GPU memory for serving LLMs": https://www.substratus.ai/blog/calculating-gpu-memory-for-llm
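As a rough back-of-envelope estimate (this is the rule of thumb from the article above, not an exact figure):
memory ≈ parameter count × bytes per parameter × 1.2 (overhead)
Example: a 7B-parameter model quantized to 4 bits (0.5 bytes per parameter) → 7B × 0.5 × 1.2 ≈ 4.2 GB, which fits comfortably in 36GB of unified memory.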
_____________________________________
🔔 Subscribe to our channel for more tutorials and coding tips: https://www.youtube.com/channel/UCuuySIxs4zmMhH7Q-obyk_Q?sub_confirmation=1
👍 Like this video if you found it helpful!
💬 Share your thoughts and questions in the comments section below!
GitHub: https://github.com/AIDevBytes
🎯 My Goals for the Channel 🎯
_____________________________________
My goal for this channel is to share the knowledge I have gained over 20+ years in the field of technology in an easy-to-consume way. My focus will be on offering tutorials related to cloud technology, development, generative AI, and security-related topics.
I'm also considering expanding my content to include short videos focused on tech career advice, particularly aimed at individuals aspiring to enter "Big Tech." Drawing from my experiences as both an individual contributor and a manager at Amazon Web Services, where I currently work, I aim to share insights and guidance to help others navigate their career paths in the tech industry.
_____________________________________
#ollama #mac #apple #llama2 #aichatbot #ai