Task engine for summarizing text using LLaMA 3
To use the meta-llama/Meta-Llama-3-8B-Instruct model, you need a Hugging Face token.
Get Your Token:
- Log in to Hugging Face, go to Settings > Access Tokens, and generate a token.
Option 1: Store Locally:
- Run this command and enter your token when prompted:
huggingface-cli login
Option 2: Use an Environment Variable:
- Set the HF_TOKEN variable before running the script:
export HF_TOKEN=your_token_here
python summarization.py
- On Windows (Command Prompt):
set HF_TOKEN=your_token_here
python summarization.py
Note: Do not commit your token to this repository. It is private and should be managed locally or via environment variables.
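As a rough sketch of how the script might pick up the token set above (an assumption; the helper name `get_hf_token` is hypothetical, not part of the actual summarization.py):

```python
import os


def get_hf_token() -> str:
    """Return the Hugging Face token from the HF_TOKEN environment variable.

    Raises a RuntimeError with setup instructions if the variable is missing,
    so the script fails early instead of at model-download time.
    """
    token = os.environ.get("HF_TOKEN")
    if not token:
        raise RuntimeError(
            "HF_TOKEN is not set. Run `huggingface-cli login` or "
            "`export HF_TOKEN=your_token_here` before running summarization.py."
        )
    return token


# The token could then be passed when loading the gated model, e.g.:
# from transformers import pipeline
# summarizer = pipeline(
#     "text-generation",
#     model="meta-llama/Meta-Llama-3-8B-Instruct",
#     token=get_hf_token(),
# )
```

Note that `huggingface-cli login` stores the token in the local Hugging Face cache, so libraries that read that cache may not need HF_TOKEN at all; the explicit check above only covers the environment-variable route.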