Software Development

Setting Up Meta’s Llama 2 on Your Local Machine

January 22, 2025

2 mins read

In the realm of natural language processing (NLP), Meta’s Llama 2 has emerged as a formidable contender, offering unparalleled capabilities in understanding and generating human-like text. Whether you’re a developer, researcher, or AI enthusiast, setting up Llama 2 locally on your machine can unlock a new horizon of possibilities.

Hardware Requirements

Before diving into the setup process, it’s crucial to ensure your system meets the hardware requirements necessary for running Llama 2. These include:

    • CPU: Intel i5/i7/i9 or AMD Ryzen equivalent, with at least 4 cores for optimal performance.
    • RAM: Minimum of 16GB, though 32GB or more is recommended for handling larger models or extensive datasets.
    • Storage: At least 10GB of free space for the installation and additional space for datasets.
    • GPU: Optional but highly recommended for accelerating computations. NVIDIA GPUs with CUDA support are preferred.
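
As a quick sanity check against the figures above, the snippet below (a rough sketch using only the Python standard library) reports your core count and free disk space; total RAM is easiest to confirm in your OS's system settings or with a tool like htop.

```python
import os
import shutil

# Rough self-check against the requirements above.
# (The thresholds in the comments are the article's minimums, not hard limits.)
cores = os.cpu_count() or 1
free_gb = shutil.disk_usage("/").free / 1e9

print(f"CPU cores: {cores} (recommended: 4+)")
print(f"Free disk: {free_gb:.1f} GB (recommended: 10+)")
```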

1. Setting Up on Mac/Linux

Step 1. Install Homebrew: Open the Terminal and run the following command to install Homebrew, a package manager for macOS:

    /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

Step 2. Install Python and Git: Use Homebrew to install Python and Git with the command:

    brew install python git

Step 3. Clone the Llama 2 Repository: Navigate to the directory where you want to install Llama 2 and clone Meta’s repository from GitHub (cloning into a llama2 folder for convenience):

    git clone https://github.com/meta-llama/llama.git llama2

Step 4. Install Dependencies: Navigate into the cloned repository directory and install the required Python dependencies:

    cd llama2
    pip3 install -r requirements.txt

2. How to Run Llama 2 Using Ollama

Ollama stands out for its simplicity, cost-effectiveness, privacy, and versatility, making it an attractive alternative to cloud-based LLM solutions. It eliminates latency and data transfer issues associated with cloud models and allows for extensive customization. It is currently available on Mac, Linux and Windows.

Setting Up Ollama on Mac

Step 1. Visit the Ollama website and download the Ollama dmg package

Step 2. Install one of the Llama models that Ollama currently supports. Simply run this command in your Mac Terminal:

    ollama run llama2

If you want to test out the pre-trained version of Llama 2 without chat fine-tuning, use this command:

    ollama run llama2:text

There are many versions of Llama 2 that Ollama supports out of the box. Depending on the parameter count and your system’s memory, select the option that best fits your hardware:

  • 7b models generally require at least 8GB of RAM
  • 13b models generally require at least 16GB of RAM
  • 70b models generally require at least 64GB of RAM
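
These figures follow roughly from weight storage alone: a model with n billion parameters needs about n × (bits per weight ÷ 8) GB just for its weights, before counting working memory. A small back-of-the-envelope sketch (approximate figures, not exact requirements):

```python
# Back-of-the-envelope weight-memory estimate (approximate):
# params (in billions) * bits-per-weight / 8 gives GB needed for weights alone.
def approx_weight_gb(params_billions: float, bits_per_weight: int) -> float:
    return params_billions * bits_per_weight / 8

for size in (7, 13, 70):
    print(f"{size}B model: ~{approx_weight_gb(size, 4):.1f} GB at 4-bit, "
          f"~{approx_weight_gb(size, 16):.0f} GB at 16-bit")
```

This is why the 4-bit quantized variants below fit on machines where the full-precision weights would not.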

Generally speaking, users with a modest local environment are best served by the 4-bit quantized 7B chat model:

    ollama run llama2:7b-chat-q4_K_M
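
Once a model has been pulled with ollama run, the local Ollama server (listening on port 11434 by default) can also be queried programmatically through its REST API. Below is a minimal sketch using only the Python standard library; the generate() call assumes the Ollama server is running locally.

```python
import json
import urllib.request

# Ollama's default local endpoint for one-shot generation.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(prompt: str, model: str = "llama2") -> bytes:
    # stream=False asks Ollama for a single JSON response
    # instead of a stream of partial tokens.
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode("utf-8")

def generate(prompt: str, model: str = "llama2") -> str:
    # Requires a running Ollama server with the model already pulled.
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_payload(prompt, model),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example usage (with Ollama running):
#   print(generate("Why is the sky blue?"))
```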

By ByteTuned Editorial Team

ByteTuned is an award-winning development agency, trusted globally for best-in-class execution in web, mobile, and team extension projects.
